The VRML Architecture Group (VAG) has sent out a request for proposal (RFP) for VRML 2.0. Moving Worlds is a proposal that satisfies this request. It is a collaborative effort among a number of individuals and organizations. The goal of the Moving Worlds effort is not only to meet all requirements of the RFP, but to do so in an open forum. Please visit this site often and join the moving-worlds mailing group to keep up with the latest developments.
This page contains links to all documents related to the proposal as well as supporting and related documents and available sample software. Please note that Moving Worlds is an evolving proposal. Throughout the month of March it will be refined and updated with new information. The change log will be updated with the latest changes to the documents. Also, visit our voting booth and vote on issues relating to Moving Worlds. Questions change regularly so visit often.
A FAQ is now available to answer common questions specifically about Moving Worlds.
Many individuals and organizations have been involved in the proposal. The work of everyone who has taken the time to contribute in some way is appreciated. Moving Worlds is a tribute to the successful collaboration of all of us.
If you want to print the Moving Worlds proposal, it may be convenient to view the entire proposal as a single HTML document (about 350K). Note that this single-document format is intended for printing only; links in that document are not guaranteed to work.
The Moving Worlds mailing list has been set up for discussions about the proposal. Please subscribe to get involved in the process. To subscribe, send an email message to moving-worlds-request@sgi.com. In the message body type:
The VAG has issued a Request for Proposal (RFP) for VRML 2.0 candidates. The RFP sets out certain requirements for the submission of proposals. Moving Worlds satisfies these requirements as follows.
Moving Worlds is a collaboration of many individuals and organizations. Organizations involved are:
A reference implementation is being created by Silicon Graphics.
See the examples section of the proposal.
Moving Worlds has grown out of the VRML 1.0 specification. It has been influenced by the VRBS work at SDSC as well as work at Silicon Graphics in the Inventor and Performer products. The initial proposal as it appears today is an integration of work done by Silicon Graphics, WorldMaker and Sony. Every member of the VAG has had a hand in various pieces and input from all over the VRML world has been incorporated. Gavin Bell has headed the effort at SGI to produce the final form of the proposal as it appears here.
Moving Worlds is free of any legal restrictions.
Coming Soon! A Moving Worlds parser for Windows 95 and IRIX. This parser will provide functionality similar to QvLib for VRML 1.0.
It will contain source ready to be compiled for Windows 95 and SGI IRIX.
It is being supplied, royalty free, by Silicon Graphics, Inc.
A sample implementation of the Moving Worlds proposal will also be available from one or more of the companies listed above soon after final submission of the proposal.
March 5, 1996
This overview provides a brief high-level summary of the Moving Worlds proposal for the VRML 2.0 specification. (The full proposed spec is available at http://webspace.sgi.com/moving-worlds/spec/spec.main.html.) The purposes of the overview are:
The overview consists of two subpages:
This overview assumes that readers are at least vaguely familiar with VRML 1.0. If you're not, read the introduction to the official VRML 1.0 spec. Note that Moving Worlds includes some changes to VRML 1.0 concepts and names, so although you should understand the basic idea of what VRML is about, you shouldn't hold on too strongly to details and definitions from 1.0 as you read the Moving Worlds proposal.
January 31, 1996
VRML 1.0 provided a means of creating and viewing static 3D worlds; VRML 2.0 will provide much more. The overarching goal of the Moving Worlds proposal for VRML 2.0 is to provide a richer, more exciting, more interactive user experience than is possible within the static boundaries of VRML 1.0. The secondary goals of the proposal are to provide a solid foundation that future VRML expansion can grow out of, and to keep things as simple and as fast as possible -- for everyone from browser developers to world designers to end users -- given the other goals.
Moving Worlds provides these extensions and enhancements to VRML 1.0:
Each section of this summary contains links to relevant portions of the full spec.
You can add realism to the static geometry of your world using new features of Moving Worlds:
New nodes allow you to create ground-and-sky backdrops to scenes, add distant mountains and clouds, and dim distant objects with fog. Another new node lets you easily create irregular terrain instead of using flat planes for ground surfaces.
Moving Worlds provides sound-generating nodes to further enhance realism -- you can put crickets, breaking glass, ringing telephones, or any other sound into a scene.
If you're writing a browser, you'll be happy to see that optimizing and parsing files are easier than in VRML 1.0, thanks to a new simplified scene graph structure.
No more moving like a ghost through cold, dead worlds: now you can directly interact with objects and creatures you encounter. New sensor nodes set off events when you move in certain areas of a world and when you click certain objects. They even let you drag objects or controls from one place to another. Another kind of sensor keeps track of the passage of time, providing a basis for everything from alarm clocks to repetitive animations.
And no more walking through walls. Collision detection ensures that solid objects react like solid objects; you bounce off them (or simply stop moving) when you run into them. Terrain following allows you to travel up and down steps or ramps.
Moving Worlds wouldn't be able to move without the new Script nodes. Using Scripts, you can not only animate creatures and objects in a world, but give them a semblance of intelligence. Animated dogs can fetch newspapers or frisbees; clock hands can move; birds can fly; robots can juggle.
These effects are achieved by means of events: a script takes input from sensors and generates events based on that input, and those events can change other nodes in the world. Events are passed among nodes by way of special statements called routes.
Have an idea for a new kind of geometry node that you want everyone to be able to use? Got a nifty script that you want to turn into part of the next version of VRML? In Moving Worlds, you can encapsulate a group of nodes together as a new node type, a prototype, and then make that node type available to anyone who wants to use it. You can then create instances of the new type, each with different field values -- for instance, you could create a Robot prototype with a robotColor field, and then create as many individual different-colored Robot nodes as you like.
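For instance, a minimal sketch of that Robot idea might look like the following (the Cube body is a stand-in, and the exact PROTO syntax is covered later in the proposal):

PROTO Robot [ field MFColor robotColor .5 .5 .5 ] {
  Shape {
    appearance Appearance {
      material Material { diffuseColor IS robotColor }
    }
    geometry Cube { }    # stand-in geometry for the robot's body
  }
}
DEF Robby Robot { robotColor 1 0 0 }    # a red robot
DEF Bluey Robot { robotColor 0 0 1 }    # a blue robot from the same prototype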
So how does all this fit together? Here's a look at possibilities for implementing a fully-interactive demo world called Gone Fishing.

In Gone Fishing, you start out hanging in space near a floating worldlet. If you wanted a more earthbound starting situation, you could (for instance) make the worldlet an island in the sea, using a Background node to show shaded water and sky meeting at the horizon as well as distant unmoving geometry like mountains. You could also add a haze in the distance using the fog parameters in a Fog node.
As you approach the little world, you can see two neon signs blinking on and off to attract you to a building. Each of those signs consists of two pieces of geometry under a Switch node. A TimeSensor generates time events which a Script node picks up and processes; the Script then sends other events to the Switch node telling it which of its children should be active. All events are sent from node to node by way of ROUTE statements.
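A hedged sketch of that wiring (several names here -- the time eventOut, the whichChild selection field, and the Script contents -- are assumptions for illustration; see the node reference for exact interfaces):

DEF Flasher TimeSensor { }                  # generates time events
DEF BlinkLogic Script {
  eventIn  SFTime  tick
  eventOut SFInt32 signState                # alternates between 0 and 1
  scriptType "java"
  behavior   "blink.class"
}
DEF Sign Switch {
  children [
    Shape { ... },                          # sign lit
    Shape { ... }                           # sign dark
  ]
}
ROUTE Flasher.time         TO BlinkLogic.tick
ROUTE BlinkLogic.signState TO Sign.set_whichChild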
As you approach the building -- a domed aquarium on a raised platform -- you notice that the entry portals are closed. There appears to be no way in, until you click the front portal; it immediately slides open with a motion like a camera's iris. That portal is attached to a ClickSensor that detects your click; the sensor tells a Script node that you've clicked, and the Script animates the opening portal, moving the geometry for each piece of the portal a certain amount at a time. The script writer only had to specify certain key frames of the animation; interpolator nodes generate intermediate values to provide smooth animation between the key frames. The door, by the way, is set up for collision detection using a Collision node, so that without clicking to open it you'd never be able to get in.
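In outline, that might look like this (the sensor and interpolator field names -- touchTime, set_fraction, value_changed -- are assumptions for illustration):

DEF FrontPortal Collision {
  children [
    DEF PortalClick ClickSensor { },
    DEF PortalPiece Transform { children [ Shape { ... } ] }
  ]
}
DEF PortalOpener Script {
  eventIn  SFTime  clicked
  eventOut SFFloat fraction                 # ramps from 0 to 1 as the portal opens
  scriptType "java"
  behavior   "iris.class"
}
DEF PieceInterp PositionInterpolator { ... }   # key frames for one portal piece
ROUTE PortalClick.touchTime     TO PortalOpener.clicked
ROUTE PortalOpener.fraction     TO PieceInterp.set_fraction
ROUTE PieceInterp.value_changed TO PortalPiece.set_translation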
You enter the aquarium and a light turns on. A BoxProximitySensor node inside the room noticed you coming in and sent an event to, yes, another Script node, which told the light to turn on. The sensor, script, and light can also easily be set up to darken the room when you leave.
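One plausible wiring for the light (the sensor's isActive eventOut is assumed here for illustration):

DEF RoomSensor BoxProximitySensor { ... }   # sized to enclose the room
DEF LightLogic Script {
  eventIn  SFBool visitorPresent
  eventOut SFBool lightOn
  scriptType "java"
  behavior   "roomlight.class"
}
DEF RoomLight PointLight { on FALSE }
ROUTE RoomSensor.isActive TO LightLogic.visitorPresent
ROUTE LightLogic.lightOn  TO RoomLight.set_on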
Inside the aquarium, you can see and hear bubbles drifting up from the floor. The bubbles are moved by another Script; the bubbling sound is created by a PointSound node. As you move further into the building and closer to the bubbles, the bubbling sound gets louder.
Besides the bubbles, which always move predictably upward, three fish swim through the space inside the building. The fish could all be based on a single Fish node type, defined in this file by a PROTO statement as a collection of geometry, appearance, and behavior; to create new kinds of fish, the world builder could just plug in new geometry or behavior.
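In outline (details elided; the fishColor field is illustrative):

PROTO Fish [ field MFColor fishColor .5 .5 .5 ] {
  Transform {
    children [
      Shape { ... },                 # body geometry and appearance
      DEF Swimmer Script { ... }     # swimming behavior
    ]
  }
}
DEF Fred  Fish { }                   # a default-gray fish
DEF Wanda Fish { fishColor 1 .6 0 }  # same prototype, new color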
Proximity sensors aren't just for turning lights on and off; they can be used by moving creatures as well. For example, the fish could be programmed (using a BoxProximitySensor/Script/ROUTE combination similar to the one described above) to avoid you by swimming away whenever you got too close. Even that behavior wouldn't save them from users who don't follow directions, though:
Despite (or maybe because of) the warning sign on the wall, most users "touch" one or more of the swimming fish by clicking them. Each fish behaves differently when touched; one of them swims for the door, one goes belly-up. These behaviors are yet again controlled by Script nodes.
To further expand Gone Fishing, a world designer might allow users to "pick up" the fish and move them from place to place. This could be accomplished with a PlaneSensor node, which translates a user's click-and-drag motion into translations within the scene. Other additions -- sharks that eat fish, tunnels for the fish to swim through, a kitchen to cook fish dinners in, and so on -- are limited only by the designer's imagination.
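For instance, a drag could be routed straight into a fish's transformation (reusing the hypothetical Fish prototype sketched above; the translation_changed eventOut name is an assumption):

DEF FishDragger PlaneSensor { }
DEF FishFrame Transform { children [ Fish { } ] }
ROUTE FishDragger.translation_changed TO FishFrame.set_translation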
Gone Fishing is just one example of the sort of rich, interactive world you can build with Moving Worlds. For details of the new nodes and file structure, see the conceptual section of the Moving Worlds proposed specification.
March 5, 1996
This document provides a very brief list of the changes to the set of predefined node types for Moving Worlds. It briefly describes all the newly added nodes, summarizes the changes to VRML 1.0 nodes, and lists the VRML 1.0 nodes that have been deleted in Moving Worlds. (For fuller descriptions of each node type, click the type name to link to the relevant portion of the Moving Worlds specification proposal.) Finally, this document briefly describes the new field types in Moving Worlds.
The new node types are listed by category, using the same categorization used by the Moving Worlds specification:
In place of the old Info node type, Moving Worlds provides several new node types to give specific information about the scene to the browser:
Almost all node types have been changed in one way or another -- if nothing else, most can now send and receive simple events. The most far-reaching changes, however, are in the new approaches to grouping nodes: in particular, Separators have been replaced by Transforms, which incorporate the fields of the now-defunct Transform node, and Groups no longer allow state to leak. The other extensive changes are in the structure of geometry-related nodes (which now occur only as fields in a Shape node). See the section of the spec titled "Structuring the Scene Graph" for details.
The following VRML 1.0 node types have been removed from Moving Worlds:
In addition to all of the other changes, Moving Worlds introduces a couple of new field types:
Last modified: March 8, 1996. This document can be found at http://webspace.sgi.com/moving-worlds/spec/spec.main.html
This document describes the complete specification for VRML 2.0. It contains the following sections:
Return to the full Moving Worlds proposal
March 5, 1996
This section describes key concepts related to the use of VRML, including how nodes are combined into scene graphs, how nodes receive and generate events, how to create node types using prototypes, how to add node types to VRML and export them for use by others, and how to incorporate programmatic scripts into a VRML file.
This subdocument includes the following sections:
For easy identification of VRML files, every VRML 2.0 file must begin with the characters:
#VRML V2.0 utf8
The identifier utf8 allows for international characters to be displayed in VRML using the UTF-8 encoding of the ISO 10646 standard. Unicode is an alternate encoding of ISO 10646. UTF-8 is explained under the Text node.
Any characters after these on the same line are ignored. The line is terminated by either the ASCII newline or carriage-return characters.
The # character begins a comment; all characters until the next newline or carriage return are ignored. The only exception to this is within double-quoted SFString and MFString fields, where the # character will be part of the string.
Note: Comments and whitespace may not be preserved; in particular, a VRML document server may strip comments and extra whitespace from a VRML file before transmitting it. WorldInfo nodes should be used for persistent information such as copyrights or author information. To extend the set of existing nodes in VRML 2.0, use prototypes or external prototypes rather than named information nodes.
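For example, a short file demonstrating both rules (the WorldInfo field names shown here are illustrative):

#VRML V2.0 utf8
# This entire line is a comment and may be stripped before transmission.
WorldInfo {
  title "Comment example"                    # a comment after a field
  info  [ "Copyright 1996, the #1 author" ]  # this # is inside a string, so it stays
}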
Blanks, tabs, newlines and carriage returns are whitespace characters wherever they appear outside of string fields. One or more whitespace characters separate the syntactical entities in VRML files, where necessary.
After the required header, a VRML file can contain the following:
Field names start with lowercase letters. Node types start with uppercase. The remainder of the characters may be any printable ASCII characters (0x21-0x7E) except curly braces {}, square brackets [], single ' or double " quotes, sharp #, backslash \, plus +, period ., or ampersand &.
Node names (specified using the DEF keyword; see the "Instancing" section of this document for details) must not begin with a digit, but they may begin with and contain any UTF8 character except those below 0x21 (control characters and white space), and the characters {} [] ' " # \ + . and &.
VRML is case-sensitive; "Sphere" is different from "sphere" and "BEGIN" is different from "begin."
A URL (Uniform Resource Locator) specifies a file located on a particular server and accessed through a specified protocol. A URN (Uniform Resource Name) provides a more persistent way to refer to data than a URL does. The exact definition of a URN is currently under debate. See the discussion at http://www.w3.org/hypertext/WWW/Addressing/Addressing.html for further details.
All fields in VRML 2.0 that have URLs are of type MFString. The strings in such a field indicate multiple places to look for files, in decreasing order of preference. If the browser can't locate the first file or doesn't know how to deal with the URL or URN given as the first file, it can try the second location, and so on.
VRML 2.0 browsers are not required to support URNs. If they do not support URNs, they should ignore any URNs that appear in MFString fields along with URLs. URN support is specified in a separate document at http://earth.path.net/mitra/papers/vrml-urn.html, which may undergo minor revisions to keep it in line with parallel work happening at the IETF.
Relative URLs are handled as described in IETF RFC 1808, "Relative Uniform Resource Locators."
The file extension for VRML files is .wrl (for world).
The MIME type for VRML files is defined as follows:
x-world/x-vrml
The MIME major type for 3D world descriptions is x-world. The MIME minor type for VRML documents is x-vrml. Other 3D world descriptions, such as oogl for The Geometry Center's Object-Oriented Geometry Language, or iv, for SGI's Open Inventor ASCII format, can be supported by using different MIME minor types.
It is anticipated that the official type will change to "model/vrml". At this time, servers should present files as being of type x-world/x-vrml. Browsers should recognize both x-world/x-vrml and model/vrml.
IETF work-in-progress on this subject can be found in "The Model Primary Content Type for Multipurpose Internet Mail Extensions."
At the highest level of abstraction, VRML is just a file format for describing objects. Theoretically, the objects can contain anything--3D geometry, MIDI data, JPEG images, and so on. VRML defines a set of objects useful for doing 3D graphics. These objects are called nodes. Nodes contain data, which is stored in fields.
VRML defines several different classes of nodes. Most of the nodes can be classified into one of two categories: grouping nodes or leaf nodes. Grouping nodes gather other nodes together, allowing collections of nodes (specified in a grouping-node field called children) to be treated as a single object. Some grouping nodes also control which of their children are drawn.
Leaf nodes may not have children. Nodes that are considered leaf nodes include shapes, lights, viewpoints, sounds, scripts, sensors, interpolators, and nodes that provide information to the browser.
Shape nodes contain two kinds of additional information: geometry and appearance. For purposes of discussion, this specification uses a third node category, subsidiary nodes, for nodes that are always used within fields of other nodes and cannot be used alone. These nodes include geometry (for example, Cone and Cube), geometric property (for example, Coordinate3 and Normal), appearance (Appearance) and appearance property nodes (for example, Material and Texture2).
A node has the following characteristics:
exposedField foo
is equivalent to the declaration:
field    foo
eventIn  set_foo
eventOut foo_changed
The syntax for representing these pieces of information is as follows:
nodetype { fields }
Only the node type and braces are required; nodes may or may not have fields.
For example, this file contains a simple scene defining a view of a red sphere and a blue cube, lit by a directional light:

#VRML V2.0 utf8
Transform {
  children [
    DirectionalLight {
      direction 0 0 -1                # Light shining into scene
    },
    Transform {                       # The red sphere
      translation 3 0 1
      children [
        Shape {
          geometry Sphere { radius 2.3 }
          appearance Appearance {
            material Material { diffuseColor 1 0 0 }   # Red
          }
        }
      ]
    },
    Transform {                       # The blue cube
      translation -2.4 .2 1
      rotation 0 1 1 .9
      children [
        Shape {
          geometry Cube { }
          appearance Appearance {
            material Material { diffuseColor 0 0 1 }   # Blue
          }
        }
      ]
    }
  ]
}
This section describes the general scene graph hierarchy, how to reuse nodes within a file, coordinate systems and transformations in VRML files, and the general model for viewing and interaction within a VRML world.
A scene graph consists of grouping nodes and leaf nodes. Grouping nodes, such as Transform, LOD, and Switch, can have child nodes. These children can be other grouping nodes or leaf nodes, such as shapes, browser information nodes, lights, cameras, and sounds. Appearance, appearance properties, geometry, and geometric properties are contained within Shape nodes.
Transformations are stored within Transform nodes. Each Transform node defines a coordinate space for its children. This coordinate space is relative to the parent (Transform) node's coordinate space--that is, transformations accumulate down the scene graph hierarchy.
A node may be referenced in a VRML file multiple times. This is called instancing (using the same instance of a node multiple times; called "aliasing" or "multiple references" by other systems) and is accomplished by using the DEF and USE keywords.
The DEF keyword gives a node a name and creates an instance of the node. The USE keyword indicates that a previously named node should be used again. If several nodes were given the same name, then the last DEF encountered during parsing "wins." DEF/USE is limited to a single file; EXTERNPROTO/PROTO must be used to refer to a node type that is defined in another file. For example, if a node is defined inside a file referenced by a WWWInline node, the file containing the WWWInline node cannot USE that node.
Rendering the following scene results in three spheres being drawn. Two sphere nodes are both named "Joe"; the second (smaller) sphere is drawn twice, on either side of the first (larger) sphere:

#VRML V2.0 utf8
Transform {
  children [
    DEF Joe Sphere { },
    Transform {
      translation 2 0 0
      children [ DEF Joe Sphere { radius .2 } ]
    },
    Transform {
      translation -2 0 0
      children [
        USE Joe    # the most recently DEF'ed Joe -- the radius .2 sphere
      ]
    }
  ]
}
Tools that create VRML files may need to modify the user-defined node names to ensure that a multiply instanced node with the same name as some other node will be read correctly. The recommended way of doing this is to append an underscore followed by an integer to the user-defined name. Such tools should automatically remove these automatically generated suffixes when VRML files are read back into the tool (leaving only the user-defined names).
Similarly, if an un-named node is multiply instanced, tools will have to automatically generate a name to correctly write the VRML file. The recommended form for such names is just an underscore followed by an integer.
VRML uses a Cartesian, right-handed, 3-dimensional coordinate system. By default, objects are projected onto a 2-dimensional display device by projecting them in the direction of the positive Z axis, with the positive X axis to the right and the positive Y axis up. A modeling transformation can be used to alter this default projection.
The standard unit for lengths and distances is meters. The standard unit for angles is radians.
VRML scenes may contain an arbitrary number of local (or object-space) coordinate systems, defined by the transformation fields of the Transform node. These fields are translation, rotation, scale, scaleOrientation, and center.
Given a vertex V and a series of transformations such as:
Transform {
  translation T
  rotation R
  scale S
  children [
    Shape { geometry PointSet { ... } }
  ]
}

the vertex is transformed into vertex V' in world-space by first scaling, then rotating, and finally translating. In matrix-transformation notation, thinking of T, R, and S as the equivalent transformation matrices,

V' = T·R·S·V   (if you think of vertices as column vectors)

or

V' = V·S·R·T   (if you think of vertices as row vectors).
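For example: if S scales by 2 along each axis, R rotates 90 degrees about +Z, and T translates by (0, 1, 0), then the vertex V = (1, 0, 0) is scaled to (2, 0, 0), rotated to (0, 2, 0), and finally translated to V' = (0, 3, 0).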
Conceptually, VRML also has a world coordinate system. The various local coordinate transformations map objects into the world coordinate system, which is where the scene is assembled. Transformations accumulate downward through the scene graph hierarchy, with each Transform inheriting the transformations of its parents. (Note however, that this series of transformations takes effect from the leaf nodes up through the hierarchy. The local transformations closest to the Shape object take effect first, followed in turn by each successive transformation upward in the hierarchy.)
This specification assumes that there is a user viewing and interacting with the VRML world. It is expected that a future extension to this specification will provide mechanisms for creating multi-participant worlds. The viewing and interaction model that should be used for the single-participant case is described here.
The world creator may place any number of viewpoints in the world -- interesting places from which the user might wish to view the world. Each viewpoint is described by a Viewpoint node. Viewpoints exist in a particular coordinate system, and either the viewpoint or the coordinate system may be animated.
It is expected that browsers will support user-interface mechanisms by which users may "teleport" themselves from one viewpoint to another, and scripting-language mechanisms by which a viewer can be bound to a viewpoint which can then be animated. If a user teleports to a viewpoint that is moving (one of its parent coordinate systems is being animated), then the user should move along with that viewpoint.
The browser may provide a user interface that allows the user to change his or her viewing position or orientation, which will also change the currently bound viewpoint.
The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will roughly correspond to "real" time.
A world's creator must make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will be greater than any previous time event.
Typically, a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per "frame," where a "frame" is a single rendering of the world or one time-step in a simulation.
Most nodes can receive events, which have names and types corresponding to their fields, with the effect that the corresponding field is changed to the value of the event received. For example, the Transform node can receive set_translation events (of type SFVec3f) that change the Transform's translation field (it may also receive set_rotation events, set_scale events, and so on).
Nodes can also generate events that have names and types corresponding to their fields when those fields are changed. For example, the Transform node generates a translation_changed event when its translation field changes.
The connection between the node generating the event and the node receiving the event is called a route. A node that produces events of a given name (and a given type) can be routed to a node that receives events of the same type using the following syntax:
ROUTE NodeName.eventOutName TO NodeName.eventInName
Routes are not nodes; ROUTE is merely a syntactic construct for establishing event paths between nodes. ROUTE statements may appear at either the top-level of a .wrl file or prototype implementation, or may appear inside a node wherever fields may appear.
The types of the eventIn and the eventOut must match exactly; it is illegal to ROUTE from an SFFloat to an SFInt32 or from an SFFloat to an MFFloat.
Routes may be established only from eventOuts to eventIns. Given two exposedFields of a node "field1" and "field2", the following is illegal:
ROUTE node.field1 TO node.field2 # ILLEGAL
Instead the corresponding eventIns/eventOuts must be used:
ROUTE node.field1_changed TO node.set_field2
# or:
ROUTE node.field2_changed TO node.set_field1
Sensor nodes generate events. Geometric sensor nodes (BoxProximitySensor, ClickSensor, CylinderSensor, DiskSensor, PlaneSensor, and SphereSensor) generate events based on user actions, such as a mouse click or navigating close to a particular object. TimeSensor nodes generate events at regular intervals, as time passes.
Prototyping is a mechanism that allows the set of node types to be extended from within a VRML file. It allows the encapsulation and parameterization of geometry, behaviors, or both.
A prototype definition consists of the following:
Square brackets enclose the list of events and fields, and braces enclose the definition itself:
PROTO prototypename [
  eventIn      eventtypename name
  eventOut     eventtypename name
  exposedField fieldtypename name defaultValue
  field        fieldtypename name defaultValue
  ...
] {
  Scene graph
  (nodes, prototypes, and routes, containing IS statements)
}
A PROTO statement is not a node; it merely defines a new node type (named prototypename) that can be instantiated later in the same file as if it were a built-in node. The implementation of the prototype is contained in the scene graph rooted by the first node in the prototype body. That node may be followed by Script and/or ROUTE declarations, as necessary to implement the prototype.
PROTO and EXTERNPROTO statements may appear anywhere ROUTE statements may appear-- at either the top-level of a .wrl file or prototype implementation, or inside a node wherever fields may appear.
The eventIn and eventOut declarations export events from the prototype's implementation. Specifying the type of each event in the prototype is intended to prevent errors when the implementation of prototypes is changed and to provide consistency with external prototypes.
Events generated or received by nodes in the prototype's implementation are associated with the prototype using the keyword IS. For example, the following statement exposes a Transform node's built-in set_translation event by giving it a new name (set_position) in the prototype interface:
Transform { set_translation IS set_position }
Fields hold the persistent state of VRML objects. Allowing a prototype to export fields allows the initial state of a prototyped object to be specified when an instance of the prototype is created. The fields of the prototype are associated with fields in the implementation using the IS keyword. For example:
Transform { translation IS position }
IS statements may appear inside nodes wherever fields may appear. Specifying an IS statement for a node that is not part of a prototype's implementation is an error. It is an error for an IS statement to refer to something that is not part of the prototype's interface declaration. It is an error if the type of the field or event being exposed does not match the type declared in the prototype's interface declaration.
A prototype is instantiated as if prototypename were a built-in node. For example, a simple chair with variable colors for the leg and seat might be prototyped as:
PROTO TwoColorChair [
  field MFColor legColor  .8 .4 .7
  field MFColor seatColor .6 .6 .1
] {
  Transform {
    children [
      Transform {                 # chair seat
        children [
          Shape {
            appearance Appearance {
              material Material { diffuseColor IS seatColor }
            }
            geometry Cube { ... }
          }
        ]
      },
      Transform {                 # chair leg
        translation ...
        children [
          Shape {
            appearance Appearance {
              material Material { diffuseColor IS legColor }
            }
            geometry Cylinder { ... }
          }
        ]
      }
    ]    # End of root Transform's children
  }      # End of root Transform
}        # End of prototype
The prototype is now defined. Although it contains a number of nodes, only the legColor and seatColor fields are public. Instead of using the default legColor and seatColor, this instance of the chair has red legs and a green seat:
TwoColorChair {
  legColor  1 0 0
  seatColor 0 1 0
}
Prototype instances may be named using DEF and may be multiply instanced using USE.
A prototype instance can be used in the scene graph wherever its root node can be used. For example, a prototype defined as:
PROTO MyObject [ ... ] { Transform { ... } }
can be instantiated wherever a Transform can be used, since the root node of this prototype's implementation is a Transform node.
A prototype's implementation defines a DEF/USE name scope separate from the rest of the scene; nodes DEF'ed inside the prototype implementation may not be USE'ed outside of the prototype implementation, and nodes DEF'ed outside the prototype implementation may not be USE'ed inside the prototype implementation.
Prototype definitions appearing inside a prototype implementation are local to the enclosing prototype. For example, given the following:
PROTO one [ ] {
  PROTO two [ ] { ... }
  ...
  two { }   # Instantiation inside "one": OK
}
two { }     # ERROR: "two" may only be instantiated inside "one".
The second instantiation of "two" is illegal. IS statements inside such a nested prototype's implementation may not refer to the enclosing prototype's interface.
The syntax for defining prototypes in external files is as follows:
EXTERNPROTO prototypename [
  eventIn  eventtypename name
  eventOut eventtypename name
  field    fieldtypename name
  ...
]
"URL" or [ "URL", "URL", ... ]
The external prototype is then given the name prototypename in this file's scope. It is an error if the eventIn/eventOut declaration in the EXTERNPROTO is not a subset of the eventIn/eventOut declarations specified in the PROTO referred to by the URL. If multiple URLs are specified, the first one found should be used.
Unlike a prototype, an external prototype does not contain an inline implementation of the node type. Instead, the prototype implementation is fetched from a URL or URN. The other difference between a prototype and an external prototype is that external prototypes do not contain default values for fields. The external prototype points to a file that contains the prototype implementation, and this file contains the default values.
To allow the creation of libraries of small, re-usable PROTO definitions, browsers should recognize EXTERNPROTO URLs that end with "#name" to mean the prototype definition of "name" in the given file. For example, a library of standard materials might be stored in a file called "materials.wrl" that looks like:
#VRML V2.0 utf8
PROTO Gold   [] { Material { ... appropriate fields ... } }
PROTO Silver [] { Material { ... } }
etc.
A material from this library could be used as follows:
#VRML V2.0 utf8
EXTERNPROTO Gold [] "http://.../materials.wrl#Gold"
...
Shape {
  appearance Appearance { material Gold { } }
  geometry ...
}
The advantage is that only one http fetch needs to be done if several things are used from the library; the disadvantage is that the entire library will be transmitted across the network even if only one thing is used from it.
The set of built-in VRML nodes can be extended using either prototypes or external prototypes. External prototypes provide a way to extend VRML in a manner that all browsers will understand. If a new node type is defined as an external prototype, other browsers can parse it and understand what it looks like, or they can ignore it. An external prototype uses the URL syntax to refer to an internal or built-in implementation of a node. For example, suppose your system has a Torus geometry node. This node can be exported to other systems using an external prototype:
EXTERNPROTO Torus [
  field SFFloat bigRadius
  field SFFloat smallRadius
]
[ "urn:yourdomain:Torus", "http://machine/directory/protofile" ]
The browser can recognize the URN and look for its own internal implementation of the Torus node. If it does not recognize the URN, it goes to the next URL and searches for the specified prototype file. In this case, if the file is not found, it ignores the Torus. If more URLs are listed, the browser tries each one until it succeeds in locating an implementation for the node or it reaches the end of the list.
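Once declared, the Torus can be instantiated like any built-in geometry node -- for example (field values arbitrary):

Shape {
  appearance Appearance { material Material { } }
  geometry Torus { bigRadius 2 smallRadius .5 }
}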
Check the "File Syntax and Structure" section of this standard for the rules on valid characters in names.
To avoid namespace collisions with nodes defined by other people, one of the following conventions should be followed.
Logic is often necessary to decide what effect an event should have on the scene -- "if the vault is currently closed AND the correct combination is entered, THEN open the vault." These kinds of decisions are expressed as Script nodes that take in events, process them, and generate other events. A Script node can also keep track of information between invocations, "remembering" what its internal state is over time.
The event processing is done by a program contained in (or referenced by) the Script node's behavior field. This program can be written in any programming language that the browser supports.
A Script node is activated when it receives an event. At that point the browser executes the program in the Script node's behavior field (passing the program to an external interpreter if necessary). The program can perform a wide variety of actions: sending out events (and thereby changing the scene), performing calculations, communicating with servers elsewhere on the Internet, and so on.
Two of the most common uses for scripts will probably be animation (using interpolators to smoothly move objects from one position to another) and network operations (connecting to servers to allow multi-user interaction).
Scripts can be written in a variety of languages, including Java, C, and Perl. Moving Worlds does not require browsers to support any particular language. See appendices to this specification for bindings to Java and C.
Every time a Script node receives one or more eventIns, they are delivered to the associated script as a queue of events. In its simplest implementation, the script consists of one function or method for each eventIn in the Script node. For each eventIn in the queue, the appropriate method is executed, ordered by the time of receipt of the eventIn. These methods are passed the data from the eventIn and an SFTime time stamp, both of which are const parameters.
The author can also define an eventsProcessed() method, executed after all eventIn methods are called, to perform any post-processing necessary. For instance, the eventIn methods can simply collect data, allowing eventsProcessed() to process all the data at once, preventing duplication of work.
Actual event processing is handled by the processEvents() method or function. This is passed an array of event structures, each containing the name of the eventIn, its data, and the time stamp when it was received. A default implementation of this method is provided with the functionality described in the previous paragraphs. This may be overridden however, giving the author complete control over event processing.
After all events in the queue are handled, either on return from eventsProcessed() or processEvents(), the eventOut values stored during script execution are sent -- one event for each eventOut that was set at least once during script execution. At most one event is sent for each eventOut.
In languages that allow multiple threads, such as Java, you can use the standard language mechanisms to start new threads. When the browser disposes of the Script node (as, for instance, when the current world is unloaded), the script's shutdown() method will be called to allow the script to gracefully terminate any threads it may have created.
If you want to keep static data in a script (that is, to retain values from one invocation of the script to the next), you can use instance variables--local variables within the script, declared private. However, the value of such variables can't be relied on if the script is unloaded from the browser's memory; to guarantee that values will be retained, they must be kept in fields of the Script node.
The API provides a data type in the scripting language for every field type in VRML. For instance, the Java bindings contain a class called SFFloat, which defines methods for getting and setting the value of variables of type SFFloat. A script can get and set the value of its own fields using these data types and methods.
The API also provides a way to access other nodes in the scene. It allows getting the value of any exposed field of any node that the Script has access to.
The API provides ways for scripts to get and set global information associated with the VRML browser, such as the URL of the current world. Here are descriptions of the functions/methods that the browser API supports. The syntax given is the Java syntax.
public static String getName(); public static String getVersion();
The getName() and getVersion() methods get the "name" and "version" of the browser currently in use. These values are defined by the browser writer, and identify the browser in some (unspecified) way. They are not guaranteed to be unique or to adhere to any particular format, and are for information only. If the information is unavailable these methods return empty strings.
public static float getCurrentSpeed();
The getCurrentSpeed() method returns the speed at which the viewpoint is currently moving, in meters per second. If speed of motion is not meaningful in the current navigation type, or if the speed cannot be determined for some other reason, 0.0 is returned.
public static float getCurrentFrameRate();
The getCurrentFrameRate() method returns the current frame rate in frames per second. The way in which this is measured, and whether or not it is supported at all, is browser dependent. If frame rate is not supported, or can't be determined, 0.0 is returned.
public static String getWorldURL(); public static void loadWorld(String [] url);
The getWorldURL() method returns the URL for the root of the currently loaded world. loadWorld() loads one of the URLs in the passed string and replaces the current scene root with the VRML file loaded. The browser first attempts to load the first URL in the list; if that fails, it tries the next one, and so on until a valid URL is found or the end of the list is reached. If a URL cannot be loaded, some browser-specific mechanism is used to notify the user. Implementations may either block on a loadWorld() until the new URL finishes loading, or may return immediately and at some later time (when the load operation has finished) replace the current scene with the new one.
public static Node createVrmlFromURL( String[] url ); public static Node createVrmlFromString( String vrmlSyntax );
The createVrmlFromString() method takes a string consisting of a VRML scene description and returns the root node of the corresponding VRML scene. The createVrmlFromURL() method asks the browser to load a VRML scene description from the given URL or URLs, returning the root node of the corresponding VRML scene.
public void addRoute(Node fromNode, String fromEventOut, Node toNode, String toEventIn); public void deleteRoute(Node fromNode, String fromEventOut, Node toNode, String toEventIn);
These methods respectively add and delete a route between the given event names for the given nodes.
public void bindBackground(Node background); public void unbindBackground(); public boolean isBackgroundBound(Node background);
bindBackground() allows a script to specify which Background node should be used to provide a backdrop for the scene. Once a Background node has been bound, isBackgroundBound() indicates whether a given Background node is the currently bound one, and unbindBackground() restores the Background node in use before the previous bind. If unbindBackground() is called when nothing is bound, nothing happens. Changing the fields of a currently bound Background node changes the currently displayed background.
public void bindNavigationInfo(Node navigationInfo); public void unbindNavigationInfo(); public boolean isNavigationInfoBound(Node navigationInfo);
bindNavigationInfo() allows a script to specify which NavigationInfo node should be used to provide hints to the browser about how to navigate through a scene. Once a NavigationInfo node has been bound, isNavigationInfoBound() indicates whether a given node is the currently bound one, and unbindNavigationInfo() restores the NavigationInfo node in use before the previous bind. If unbindNavigationInfo() is called when nothing is bound, nothing happens. A script can change the fields of a NavigationInfo node using events and routes. Changing the fields of a currently bound NavigationInfo node changes the associated parameters used by the browser.
public void bindViewpoint(Node viewpoint); public void unbindViewpoint(); public boolean isViewpointBound(Node viewpoint);
In some cases, a script may need to manipulate the user's current view of the scene. For instance, if the user enters a vehicle (such as a roller coaster or elevator), the vehicle's motion should also be applied to the viewer. bindViewpoint() provides a way to bind the viewer to a given Viewpoint node. This binding doesn't itself change the viewer location or orientation; instead, it changes the fields of the given Viewpoint node to correspond to the current viewer location and orientation. (It also places the viewer in the coordinate space of the given Viewpoint node.) Once a Viewpoint is bound, the script can animate the transformation fields of the Transform that the Viewpoint is in (probably using an interpolator to generate values) and move the viewer through the scene.
Note that scripts should animate the Viewpoint's frame of reference (the transformation of the enclosing Transform) rather than the Viewpoint itself, in order to allow the user to move the viewer a little during transit (for instance, to let the user walk around inside the elevator while it's between floors). Fighting with the user for control of the viewer is a bad idea.
Note also that results are undefined for vehicle travel if the user is allowed to move out of the vehicle while the animation is running. This problem is best resolved by using collision detection to prevent the user leaving the vehicle while it's in motion. Another option is to turn off the browser's user interface during animation by setting the current navigation type to "none".
When the script has finished transporting the user, unbindViewpoint() releases the viewer from the influence of the currently bound Viewpoint, returning the viewer to the coordinate space of the previous viewpoint binding (or the base coordinate system of the scene if there's no previous binding). The fields of the now-unbound Viewpoint node return to the values they had before the binding.
And of course isViewpointBound() returns TRUE if the specified Viewpoint node is currently bound to the viewer (which implies that the fields of that Viewpoint node indicate the current position and orientation of the viewer). The method returns FALSE if the specified Viewpoint is not bound.
Scripts that need to use system and networking calls should use the scripting language's system and networking libraries. The VRML API doesn't provide such calls.
A Script node that decided whether or not to open a bank vault might receive vaultClosed and combinationEntered messages, produce openVault messages, and remember the correct combination and whether or not the vault is currently open. The VRML for this Script node might look like this:
DEF OpenVault Script {
  # Declarations of what's in this Script node:
  eventIn  SFBool   vaultClosed
  eventIn  SFString combinationEntered
  eventOut SFBool   openVault
  field    SFString correctCombination "43-22-9"
  field    SFBool   currentlyOpen FALSE

  # Implementation of the logic:
  scriptType "java"
  behavior   "java.class"
}
The "java.class" file will contain a compiled version of the following Java source code:
import vrml.*;

class VaultScript extends Script {
    // Declare fields
    private SFBool   currentlyOpen      = (SFBool)   getField("currentlyOpen");
    private SFString correctCombination = (SFString) getField("correctCombination");

    // Declare eventOuts
    private SFBool openVault = (SFBool) getEventOut("openVault");

    // Handle eventIns
    public void vaultClosed(ConstSFBool value, SFTime ts) {
        currentlyOpen.setValue(false);
    }

    public void combinationEntered(ConstSFString combo, SFTime ts) {
        // Compare string values rather than object references:
        if (!currentlyOpen.getValue() &&
            combo.getValue().equals(correctCombination.getValue())) {
            currentlyOpen.setValue(true);
            openVault.setValue(true);
        }
    }
}
March 7, 1996
This section provides a detailed description of each node in VRML 2.0. It is organized by functional group. Nodes within each group are listed alphabetically. (An alphabetical Index of Nodes and Fields is also available.)
Intrinsic nodes are nodes whose functionality cannot be duplicated by any combination of other nodes; they form the core functionality of VRML. The functional groups used in this section are as follows:
These nodes provide common functionality that all VRML implementations are required to support, but that can be created using one or more of the intrinsic nodes. A reference PROTO implementation is given for these nodes. (Note: we didn't have time before the VRML 2.0 RFP deadline to do all implementations; for several nodes we just sketch out what the PROTO would look like.)
The last item in each node description is the public interface for the node, with default values. (The syntax for the public interface is the same as that for prototypes.) For example:
DirectionalLight {
  exposedField SFBool  on               TRUE
  exposedField SFFloat intensity        1
  exposedField SFFloat ambientIntensity 0
  exposedField SFColor color            1 1 1
  exposedField SFVec3f direction        0 0 -1
}
Fields that have associated implicit set_ and _changed events are labeled exposedField. For example, the on field has a set_on input event and an on_changed output event. Exposed fields may be connected using ROUTE statements, and may be read and/or written by Script nodes.
Note that this information is arranged in a slightly different manner in the file format for each node. The keywords "field" or "exposedField" and the types of the fields are not specified when instantiating a node in the file format. For example the file format for the above example is:
DirectionalLight {
  on               TRUE
  intensity        1
  ambientIntensity 0
  color            1 1 1
  direction        0 0 -1
}
The Collision grouping node specifies to a browser what objects in the scene should not be navigated through. It is useful to keep viewers from walking through walls in a building, for instance. Collision response is browser-defined. For example, when the user comes sufficiently close to an object to register as a collision, the browser may have the user bounce off the object or simply come to a stop.
The children of a Collision node are always drawn, just as the children of a simple Group are drawn. These children are the objects that are checked for collision. If desired, a proxy object can be supplied, and this proxy object will be checked for collision in place of the actual child objects (see description of the proxy field, below).
By default, collision detection is ON. The collide field in this node allows collision detection to be turned off, in which case the children of the Collision node will not be checked for collision, even though they will be drawn.
Since collision with arbitrarily complex geometry is computationally expensive, one method of increasing efficiency is to be able to define an alternate geometry that could serve as a proxy for colliding against. This collision proxy, contained in the proxy field, could be as crude as a simple bounding box or bounding sphere, or could be more sophisticated (for example, the convex hull of a polyhedron).
If the value of the collide field is FALSE, then no collision is performed with the affected geometry. If the value of the collide field is TRUE, then the proxy field defines the geometry against which collision testing is done. If the proxy value is NULL, the children of the collision node are collided against. If the proxy value is not NULL, then it contains the geometry that is used in collision computations.
If children is empty, collide is TRUE, and a proxy is specified, then collision detection is done against the proxy but nothing is displayed -- this is a way of colliding against "invisible" geometry.
The collision eventOut will generate an event containing the time when the path of the user through the scene intersects a geometry in this collision node against which collisions are being checked. An ideal implementation would compute the exact moment of intersection, but implementations may approximate the ideal by sampling the positions of geometries and the viewer. Refer to the NavigationInfo node for parameters that control the user's size.
Collision {
  exposedField SFBool collide  TRUE
  field        SFNode proxy    NULL
  exposedField MFNode children []
  eventOut     SFTime collision
}
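For example, an expensive-to-test wall might be collided against a simple box instead (a sketch only; geometry details elided):

DEF Wall Collision {
  children [ Shape { ... } ]           # detailed wall geometry -- always drawn
  proxy Shape { geometry Cube { } }    # cheap stand-in used for collision tests
}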
A Transform is a grouping node that defines a coordinate system for its children that is relative to the coordinate systems of its parents. See also "Coordinate Systems and Transformations."
The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside this Transform. These are hints to the browser that it may use to optimize certain operations such as determining whether or not the Transform needs to be drawn. If the specified bounding box is smaller than the true bounding box of the Transform, results are undefined. The bounding box should be large enough to completely contain the effects of all sounds, lights and fog nodes that are children of this Transform. If the size of this Transform may change over time because its children are animating (moving), then the bounding box must also be large enough to contain all possible animations (movements). The bounding box should be only the union of the Transform's children's bounding boxes; it should not include the Transform's transformation.
The add_children event adds the nodes passed in to the Transform's children field. Any nodes passed in the add_children event that are already in the Transform's children list are ignored. The remove_children event removes the nodes passed in from the Transform's children field. Any nodes passed in the remove_children event that are not in the Transform's children list are ignored.
The translation, rotation, scale, scaleOrientation and center fields define a geometric 3D transformation consisting of (in order) a (possibly) non-uniform scale about an arbitrary point, a rotation about an arbitrary point and axis, and a translation. The Transform node:
Transform {
  translation      T1
  rotation         R1
  scale            S
  scaleOrientation R2
  center           T2
  ...
}
is equivalent to the nested sequence of:
Transform { translation T1
 Transform { translation T2
  Transform { rotation R1
   Transform { rotation R2
    Transform { scale S
     Transform { rotation -R2
      Transform { translation -T2
       ...
}}}}}}}

The Transform node's public interface is:

Transform {
  field        SFVec3f    bboxCenter       0 0 0
  field        SFVec3f    bboxSize         0 0 0
  exposedField SFVec3f    translation      0 0 0
  exposedField SFRotation rotation         0 0 1 0
  exposedField SFVec3f    scale            1 1 1
  exposedField SFRotation scaleOrientation 0 0 1 0
  exposedField SFVec3f    center           0 0 0
  exposedField MFNode     children         [ ]
  eventIn      MFNode     add_children
  eventIn      MFNode     remove_children
}
This section describes the leaf nodes in detail and is organized into the following subsections:
The Viewpoint node defines an interesting location in a local coordinate system from which the user might wish to view the scene. Viewpoints may be animated, and Script nodes may "bind" the user to a particular viewpoint using Script API calls to the browser. A world creator can automatically move the user's view through the world by binding the user to a viewpoint and then animating that viewpoint.
The position and orientation fields of the Viewpoint node specify relative locations in the local coordinate system. Position is relative to the coordinate system's origin (0,0,0), while orientation specifies a rotation relative to the default orientation; the default orientation has the user looking down the -Z axis with +X to the right and +Y straight up. Note that the single orientation rotation (which is a rotation about an arbitrary axis) is sufficient to completely specify any combination of view direction and "up" vector.
The fieldOfView field specifies a preferred field of view from this viewpoint, in radians. A smaller field of view corresponds to a telephoto lens on a camera; a larger field of view corresponds to a wide-angle lens on a camera. The field of view should be greater than zero and smaller than PI; the default value corresponds to a 45 degree field of view. fieldOfView is a hint to the browser and may be ignored. A browser rendering the scene into a rectangular window will ideally scale things such that a solid angle of fieldOfView from the viewpoint in the view direction will be completely visible in the window.
A viewpoint can be placed in a VRML world to specify the initial location of the viewer when that world is entered. Browsers should recognize the URL syntax "..../scene.wrl#ViewpointName" as specifying that the user's initial view when entering the "scene.wrl" world should be the first viewpoint in file "scene.wrl" that appears as "DEF ViewpointName Viewpoint { ... }".
The description field of the viewpoint may be used by browsers that provide a way for users to travel between viewpoints. The description should be kept brief, since browsers will typically display lists of viewpoints as entries in a pull-down menu, etc.
Viewpoint {
  exposedField SFVec3f    position    0 0 0
  exposedField SFRotation orientation 0 0 1 0
  exposedField SFFloat    fieldOfView 0.785398
  field        SFString   description ""
}
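For example, a named entry view that a browser could jump to via the "#ViewpointName" URL syntax described above:

DEF FrontDoor Viewpoint {
  position    0 1.6 10       # ten meters back, at average eye height
  orientation 0 1 0 0        # default orientation: looking down -Z
  fieldOfView 0.785398       # the 45-degree default, stated explicitly
  description "Front door"
}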
This grouping includes nodes that light the scene (DirectionalLight, PointLight, and SpotLight) as well as nodes that affect the lighting within the scene, such as the Fog node.
Lighting is additive, so objects are illuminated by the sum of all of the direct and ambient illumination impinging upon them. Ambient illumination results from scattering and reflection of direct illumination, so logically ambient light is tied to lights in the scene, with each having an ambientIntensity. The contribution of a light to the overall ambient lighting is computed as ambientLight[i] = on ? (intensity * ambientIntensity * color[i]) : 0, i=0,1,2. This allows the light's overall brightness, both direct and ambient, to be controlled by changing the intensity. Renderers that do not support per-light ambient illumination may simply use this information to set the ambient lighting parameters when the world is loaded.
The DirectionalLight node defines a directional light source that illuminates along rays parallel to a given 3-dimensional vector.
A directional light source illuminates only the objects in its enclosing Group. The light illuminates everything within this coordinate system, including the objects that precede it in the scene graph--for example:
Transform {
    children [
        Shape { ... },
        DirectionalLight { .... }    # lights the preceding shape
    ]
}
Some low-end renderers do not support the concept of per-object lighting. This means that placing DirectionalLights inside local coordinate systems, which implies lighting only the objects beneath the Transform with that light, is not supported in all systems. For the broadest compatibility, lights should be placed at outermost scope.
DirectionalLight {
    exposedField SFBool  on               TRUE
    exposedField SFFloat intensity        1
    exposedField SFFloat ambientIntensity 0
    exposedField SFColor color            1 1 1
    exposedField SFVec3f direction        0 0 -1
}
The Fog node defines an axis-aligned ellipsoid of dense, colored atmosphere. The size field defines the size of this foggy region in the local coordinate system. The maxVisibility field specifies the distance at which an object is completely obscured by the fog. This distance is specified in the local coordinate system (by default, in meters). The color field may be used to simulate different kinds of atmospheric effects by changing the fog's color.
An ideal implementation of fog would compute exactly how much attenuation occurs between the viewer and every object in the world and render the scene appropriately. However, implementations are free to approximate this ideal behavior, perhaps by computing the intersection of the viewing direction vector with any foggy regions and computing overall fogging parameters each time the scene is rendered.
Fog {
    exposedField SFVec3f size          0 0 0
    exposedField SFFloat maxVisibility 1
    exposedField SFColor color         1 1 1
}
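As an illustrative sketch (the values are arbitrary), a 200-meter patch of gray haze in which objects are completely obscured beyond 75 meters might be written:

    Fog {
        size          200 200 200
        maxVisibility 75
        color         0.7 0.7 0.7    # gray haze
    }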
The PointLight node defines a point light source at a fixed 3D location. A point source illuminates equally in all directions; that is, it is omni-directional.
A PointLight illuminates everything within radius of its location. A PointLight's illumination falls off with distance as specified by three attenuation coefficients. The attenuation factor is 1/(attenuation[0] + attenuation[1]*r + attenuation[2]*r^2), where r is the distance of the light to the surface being illuminated. The default is no attenuation. Renderers that do not support a full attenuation model may approximate as necessary.
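As a worked example, an attenuation of 0 0 1 gives the factor 1/r^2, a physically motivated inverse-square falloff: a surface 2 meters from the light receives one quarter of the light's full intensity, and a surface 10 meters away receives one hundredth. The default of 1 0 0 yields a constant factor of 1 at every distance, i.e. no attenuation.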
PointLight {
    exposedField SFBool  on               TRUE
    exposedField SFFloat intensity        1
    exposedField SFFloat ambientIntensity 0
    exposedField SFColor color            1 1 1
    exposedField SFVec3f location         0 0 0
    exposedField SFFloat radius           1
    exposedField SFVec3f attenuation      1 0 0
}
The SpotLight node defines a light source that is placed at a fixed location in 3-space and illuminates in a cone along a particular direction.
The cone of light extends a maximum distance of radius from its location. The light's illumination falls off with distance as specified by three attenuation coefficients. The attenuation factor is 1/(attenuation[0] + attenuation[1]*r + attenuation[2]*r^2), where r is the distance of the light to the surface being illuminated. The default is no attenuation. Renderers that do not support a full attenuation model may approximate as necessary.
The intensity of the illumination may drop off as the ray of light diverges from the light's direction toward the edges of the cone. The angular distribution of light is controlled by the cutOffAngle, beyond which the illumination is zero, and the beamWidth, the angle at which the beam starts to fall off. Renderers that support a two-cone model, with linear falloff from full intensity at the inner cone to zero at the cutoff cone, should use beamWidth as the inner cone angle. Renderers that attenuate using a cosine raised to a power should use an exponent of 0.5*log(0.5)/log(cos(beamWidth)). When beamWidth >= PI/2 (the default), the illumination is uniform out to the cutoff angle.
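As a worked example, a beamWidth of 0.5 radians gives an exponent of 0.5*log(0.5)/log(cos(0.5)) = 0.5*(-0.693)/(-0.131), or approximately 2.65. As beamWidth approaches PI/2, cos(beamWidth) approaches 0 and the computed exponent approaches 0, which agrees with the uniform-illumination rule above.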
SpotLight {
    exposedField SFBool     on               TRUE
    exposedField SFFloat    intensity        1
    exposedField SFFloat    ambientIntensity 0
    exposedField SFColor    color            1 1 1
    exposedField SFVec3f    location         0 0 0
    exposedField SFVec3f    direction        0 0 -1
    exposedField SFFloat    beamWidth        1.570796
    exposedField SFFloat    cutOffAngle      0.785398
    exposedField SFFloat    radius           1
    exposedField SFVec3f    attenuation      1 0 0
}
The Sound functional grouping includes the DirectedSound node.
ISSUE: What sound file formats should be required?
The DirectedSound node describes a sound which emits primarily in the direction defined by the direction vector. Where minRange and maxRange determine the extent of a PointSound, the extent of a DirectedSound is determined by four fields: minFront, minBack, maxFront, and maxBack.
Around the location of the emitter, minFront and minBack determine the extent of the ambient region in front of and behind the sound. If the location of the sound is taken as a focus of an ellipse, and the minBack and minFront values (in combination with the direction vector) as determining the two vertices, these three points describe an ellipse bounding the ambient region of the sound. Similarly, maxFront and maxBack determine the limits of audibility in front of and behind the sound; they describe a second, outer ellipse.
The inner ellipse is analogous to the sphere determined by the minRange field in the PointSound definition: within this ellipse, the sound is non-directional, with constant and maximal intensity. The outer ellipse is analogous to the sphere determined by the maxRange field in the PointSound definition and represents the limits of audibility of the sound. Between the two ellipses, the intensity drops off proportionally with distance and the sound is localized in space.
One advantage of this model is that a DirectedSound behaves as expected when approached from any angle; the intensity increases smoothly even if the emitter is approached from the back.
See the PointSound node for a description of all other fields.
DirectedSound {
    field        MFString name        [ ]
    field        SFString description ""
    exposedField SFFloat  intensity   1
    exposedField SFVec3f  location    0 0 0
    exposedField SFVec3f  direction   0 0 1
    exposedField SFFloat  minFront    10
    exposedField SFFloat  maxFront    10
    exposedField SFFloat  minBack     10
    exposedField SFFloat  maxBack     10
    exposedField SFBool   loop        FALSE
    exposedField SFTime   start       0
    exposedField SFTime   pause       0
}
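For example, a public-address speaker that projects its sound mostly forward might be sketched as follows (the sound file name is hypothetical):

    DirectedSound {
        name      [ "announce.wav" ]    # hypothetical sound file
        direction 0 0 1
        minFront  10     # ambient region extends 10 m in front...
        minBack   1      # ...but only 1 m behind the speaker
        maxFront  50     # audible up to 50 m in front
        maxBack   5      # and up to 5 m behind
        loop      TRUE
    }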
This functional group includes only one node, the Shape node.
A Shape node has two fields: appearance and geometry. These fields, in turn, contain other nodes. The appearance field contains an Appearance node that has material, texture, and textureTransform fields (see the Appearance node). The geometry field contains a geometry node. See Subsidiary Nodes.
Shape {
    field SFNode appearance NULL
    field SFNode geometry   NULL
}
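For example, a single red triangle could be written as the following minimal sketch:

    Shape {
        appearance Appearance {
            material Material { diffuseColor 1 0 0 }    # red
        }
        geometry IndexedFaceSet {
            coord Coordinate3 { point [ 0 0 0, 1 0 0, 0 1 0 ] }
            coordIndex [ 0, 1, 2 ]
        }
    }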
The following groups of nodes are used only in fields within other nodes. They cannot stand alone in the scene graph.
A Shape node contains one geometry node in its geometry field. This node can be an IndexedFaceSet, IndexedLineSet, PointSet, or Text node. A geometry node can appear only in the geometry field of a Shape node. Geometry nodes usually contain Coordinate3, Normal, and TextureCoordinate2 nodes in specified SFNode fields. All geometry nodes are specified in a local coordinate system determined by the geometry's parent node(s).
The ccw field indicates whether the vertices are ordered in a counter-clockwise direction when the shape is viewed from the outside (TRUE). If the order is clockwise or unknown, this field value is FALSE. The solid field indicates whether the shape encloses a volume (TRUE). If nothing is known about the shape, this field value is FALSE. The convex field indicates whether all faces in the shape are convex (TRUE). If nothing is known about the faces, this field value is FALSE.
These hints allow VRML implementations to optimize certain rendering features. Optimizations that may be performed include enabling backface culling and disabling two-sided lighting. For example, if an object is solid and has ordered vertices, an implementation may turn on backface culling and turn off two-sided lighting. If the object is not solid but has ordered vertices, it may turn off backface culling and turn on two-sided lighting.
The IndexedFaceSet node represents a 3D shape formed by constructing faces (polygons) from vertices listed in the coord field. The coord field must contain a Coordinate3 node. IndexedFaceSet uses the indices in its coordIndex field to specify the polygonal faces. An index of -1 indicates that the current face has ended and the next one begins. The last face may (but does not have to) be followed by a -1. If the greatest index in the coordIndex field is N, then the Coordinate3 node must contain at least N+1 coordinates (indexed as 0 through N).
For descriptions of the coord, normal, and texCoord fields, see the Coordinate3, Normal, and TextureCoordinate2 nodes.
If the color field is not NULL, then it must contain a Color node, whose colors are applied to the vertices (if colorPerVertex is TRUE) or faces (if colorPerVertex is FALSE) of the IndexedFaceSet; the colorIndex field, if non-empty, selects which color is used for each vertex or face.
If the normal field is NULL, then the browser should automatically generate normals, using creaseAngle to determine if and how normals are smoothed across shared vertices.
If the normal field is not NULL, then it must contain a Normal node, whose normals are applied to the vertices or faces of the IndexedFaceSet in a manner exactly equivalent to that described above for applying colors to vertices/faces.
If the texCoord field is not NULL, then it must contain a TextureCoordinate2 node. The texture coordinates in that node are applied to the vertices of the IndexedFaceSet; the texCoordIndex field, if non-empty, selects which texture coordinate is used for each vertex.
If the texCoord field is NULL, a default texture coordinate mapping is calculated using the bounding box of the shape. The longest dimension of the bounding box defines the S coordinates, and the next longest defines the T coordinates. If two or all three dimensions of the bounding box are equal, then ties should be broken by choosing the X, Y, or Z dimension in that order of preference. The value of the S coordinate ranges from 0 to 1, from one end of the bounding box to the other. The T coordinate ranges between 0 and the ratio of the second greatest dimension of the bounding box to the greatest dimension.
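For example, if a shape's bounding box is 4 units long in X, 2 units in Y, and 1 unit in Z, then S ranges from 0 to 1 along the X dimension and T ranges from 0 to 0.5 (the ratio 2/4) along the Y dimension.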
See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.
IndexedFaceSet {
    exposedField SFNode  coord           NULL
    field        MFInt32 coordIndex      [ ]
    exposedField SFNode  texCoord        NULL
    field        MFInt32 texCoordIndex   [ ]
    exposedField SFNode  color           NULL
    field        MFInt32 colorIndex      [ ]
    field        SFBool  colorPerVertex  TRUE
    exposedField SFNode  normal          NULL
    field        MFInt32 normalIndex     [ ]
    field        SFBool  normalPerVertex TRUE
    field        SFBool  ccw             TRUE
    field        SFBool  solid           TRUE
    field        SFBool  convex          TRUE
    field        SFFloat creaseAngle     0
}
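As a sketch, a square pyramid can be built from five coordinates, using -1 to separate the faces:

    IndexedFaceSet {
        coord Coordinate3 {
            point [  0  1  0,     # index 0: apex
                    -1  0  1,     # indices 1-4: base corners
                     1  0  1,
                     1  0 -1,
                    -1  0 -1 ]
        }
        coordIndex [ 1, 2, 0, -1,    # four triangular sides, ordered
                     2, 3, 0, -1,    # counter-clockwise from outside
                     3, 4, 0, -1,
                     4, 1, 0, -1,
                     4, 3, 2, 1 ]    # square base; the final -1 is optional
    }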
This node represents a 3D shape formed by constructing polylines from vertices listed in the coord field. IndexedLineSet uses the indices in its coordIndex field to specify the polylines. An index of -1 indicates that the current polyline has ended and the next one begins. The last polyline may (but does not have to) be followed by a -1.
For a description of the coord field, see the Coordinate3 node.
Lines are not texture-mapped or affected by light sources.
If the color field is not NULL, it must contain a Color node, and the colors are applied to the vertices (if colorPerVertex is TRUE) or polylines (if colorPerVertex is FALSE); the colorIndex field, if non-empty, selects which colors are used.
IndexedLineSet {
    exposedField SFNode  coord          NULL
    field        MFInt32 coordIndex     [ ]
    exposedField SFNode  color          NULL
    field        MFInt32 colorIndex     [ ]
    field        SFBool  colorPerVertex TRUE
}
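For example, two polylines sharing one coordinate list could be written:

    IndexedLineSet {
        coord Coordinate3 {
            point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]
        }
        coordIndex [ 0, 1, 2, -1,    # first polyline: three vertices
                     2, 3, 0 ]       # second polyline; trailing -1 is optional
    }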
The PointSet node represents a set of points listed in the coord field. PointSet uses the coordinates in order. The number of points in the set is specified by the numPoints field.
Points are not texture-mapped or affected by light sources.
If the color field is not NULL, it must contain a Color node that contains at least numPoints colors. Colors are always applied to each point in order.
PointSet {
    exposedField SFNode  coord     NULL
    field        SFInt32 numPoints 0
    exposedField SFNode  color     NULL
}
The Text node represents one or more text strings specified using the UTF-8 encoding of the ISO10646 character set (UTF-8 encoding is described below). Note that ASCII is a subset of UTF-8, so all ASCII strings are also UTF-8.
The text strings are contained in the string field. The fontStyle field contains one FontStyle node that specifies the font size, font family and style, direction of the text strings, and any specific language rendering techniques that must be used for non-English text.
The justify field determines where the text is positioned in relation to the origin (0,0,0) of the object coordinate system. The values for the justify field are "BEGIN", "MIDDLE", and "END". For a left-to-right direction, "BEGIN" would specify left-justified text, "END" would specify right-justified text, and "MIDDLE" would specify centered text. See the FontStyle node for details of text placement.
The spacing field determines the spacing between multiple text strings. The size field of the FontStyle node specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either X or Y by -(size * spacing). A spacing of 0 causes all strings to be rendered at the same position; a value of -1 causes subsequent strings to advance in the opposite direction.
The maxExtent field limits and scales the text string if the natural length of the string is longer than the maximum extent. If the text string is shorter than the maximum extent, it is not scaled. The maximum extent is measured horizontally for horizontal text (FontStyle node: horizontal=TRUE) and vertically for vertical text (FontStyle node: horizontal=FALSE).
The width field contains an MFFloat value that specifies the width of each text string. If the string is too short, it is stretched (either by scaling the text itself or by adding space between the characters). If the string is too long, it is compressed. If a width value is missing--for example, if there are four strings but only three width values--the missing values are considered to be 0.
For both the maxExtent and width fields, a value of 0 indicates that the string may be any width.
Textures are applied to 3D text as follows. The texture origin is at the origin of the first string, as determined by the justification. The texture is scaled equally in both S and T dimensions, with the font height representing 1 unit. S increases to the right, T increases up.
UTF-8 Character Encodings
The 2 byte (UCS-2) encoding of ISO 10646 is identical to the Unicode standard.
In order to allow standard ASCII text editors to continue to work with most VRML files, we have chosen to support the UTF-8 encoding of ISO 10646. This encoding allows ASCII text (0x0..0x7F) to appear without any changes and encodes all characters from 0x80..0x7FFFFFFF into a series of six or fewer bytes.
If the most significant bit of the first byte is 0, then the remaining seven bits are interpreted as an ASCII character. Otherwise, the number of leading 1 bits indicates the total number of bytes in the encoding. There is always a 0 bit between the count bits and any data.
The first byte can be one of the following, where each X marks a bit available to encode the character:

    0XXXXXXX   one byte      0..0x7F (ASCII)
    110XXXXX   two bytes     maximum character value is 0x7FF
    1110XXXX   three bytes   maximum character value is 0xFFFF
    11110XXX   four bytes    maximum character value is 0x1FFFFF
    111110XX   five bytes    maximum character value is 0x3FFFFFF
    1111110X   six bytes     maximum character value is 0x7FFFFFFF
All following bytes have this format: 10XXXXXX
A two-byte example: the symbol for a registered trademark is "circled R registered sign", or 174 in both ISO/Latin-1 (8859/1) and ISO 10646. In hexadecimal, it is 0xAE; in HTML, it is &reg;. In binary, 174 is 10101110; split into the 110XXXXX 10XXXXXX pattern as 110 00010, 10 101110, this yields the two-byte UTF-8 encoding 0xC2, 0xAE.
Text {
    exposedField MFString string    [ ]
    field        SFNode   fontStyle NULL
    field        SFString justify   "BEGIN"    # "BEGIN", "MIDDLE", "END"
    field        SFFloat  spacing   1.0
    field        SFFloat  maxExtent 0.0
    field        MFFloat  width     [ ]
}
Geometric properties are always contained in the corresponding SFNode fields of geometry nodes such as the IndexedFaceSet, IndexedLineSet, and PointSet nodes.
This node defines a set of RGB colors to be used in the color fields of an IndexedFaceSet, IndexedLineSet, or PointSet node.
Color nodes are only used to specify multiple colors for a single piece of geometry, such as a different color for each face or vertex of an IndexedFaceSet. A Material node is used to specify the overall material parameters of a lighted geometry. If both a Material and a Color node are specified for a geometry, the colors should ideally replace the diffuse component of the material.
Textures take precedence over colors; specifying both a Texture and a Color node for a geometry will result in the Color node being ignored.
Note that some browsers may not support per-vertex or per-face colors, in which case an average color should be computed and used instead.
Color { exposedField MFColor rgb [] }
This node defines a set of 3D coordinates to be used in the coord field of some geometry nodes (such as IndexedFaceSet, IndexedLineSet, and PointSet).
Coordinate3 { exposedField MFVec3f point [] }
This node defines a set of 3D surface normal vectors to be used in the normal field of some geometry nodes (IndexedFaceSet, ElevationGrid). This node contains one multiple-valued field that contains the normal vectors. Normals should be unit-length or results are undefined.
To save network bandwidth, it is expected that implementations will be able to automatically generate appropriate normals if none are given. However, the results will vary from implementation to implementation.
Normal { exposedField MFVec3f vector [] }
This node defines a set of 2D coordinates to be used in the texCoord field to map textures to the vertices of some geometry nodes (IndexedFaceSet and ElevationGrid).
Texture coordinates range from 0 to 1 across the texture image. The horizontal coordinate, S, is specified first, followed by the vertical coordinate, T.
TextureCoordinate2 { exposedField MFVec2f point [] }
The Appearance node occurs only within the appearance field of a Shape node. The value for any of the fields in this node can be NULL. However, if the field contains anything, it must contain one specific type of node. Specifically, the material field, if specified, must contain a Material node. The texture field, if specified, must contain a Texture2 node. The textureTransform field, if specified, must contain a Texture2Transform node.
Appearance {
    exposedField SFNode material         Material {}
    exposedField SFNode texture          NULL
    exposedField SFNode textureTransform NULL
}
The Material, Texture2, and Texture2Transform appearance property nodes are always contained within fields of an Appearance node. The FontStyle node is always contained in the fontStyle field of a Text node.
The FontStyle node, which may only appear in the fontStyle field of a Text node, defines the size, font family, and style of the text font, as well as the direction of the text strings and any specific language rendering techniques that must be used for non-English text.
The size field specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either X or Y by -(size*spacing). (See the Text node for a description of the spacing field.)
Font Family and Style: Font attributes are defined with the family and style fields. It is up to the browser to assign specific fonts to the various attribute combinations.
The family field contains an SFString value that can be "SERIF" (the default) for a serif font such as Times Roman; "SANS" for a sans-serif font such as Helvetica; or "TYPEWRITER" for a fixed-pitch font such as Courier.
The style field contains an SFString value that can be an empty string (the default); "BOLD" for boldface type; "ITALIC" for italic type; or "BOLD ITALIC" for bold and italic type.
Direction: The horizontal, leftToRight, and topToBottom fields indicate the direction of the text. The horizontal field indicates whether the text is horizontal (specified as TRUE, the default) or vertical (FALSE). The leftToRight field indicates whether the text progresses from left to right (specified as TRUE, the default) or from right to left (FALSE). The topToBottom field indicates whether the text progresses from top to bottom (specified as TRUE, the default), or from bottom to top (FALSE).
The justify field of the Text node determines where the text is positioned in relation to the origin (0,0,0) of the local coordinate system. The values for the justify field are "BEGIN", "MIDDLE", and "END". For a left-to-right direction (leftToRight = TRUE), "BEGIN" would specify left-justified text, "MIDDLE" would specify centered text, and "END" would specify right-justified text.
For horizontal text (horizontal = TRUE), the first line of text is positioned with its baseline (bottom of capital letters) at Y = 0. The text is positioned on the positive side of the X origin when leftToRight is TRUE and justify is "BEGIN"; the same positioning is used when leftToRight is FALSE and justify is "END". The text is on the negative side of the X origin when leftToRight is TRUE and justify is "END" (and when leftToRight is FALSE and justify is "BEGIN"). For justify = "MIDDLE" and horizontal = TRUE, each string will be centered at X = 0.
For vertical text (horizontal is FALSE), the first line of text is positioned with the left side of the glyphs along the Y axis. When topToBottom is TRUE and justify is "BEGIN" (or when topToBottom is FALSE and justify is "END"), the text is positioned with the top left corner at the origin. When topToBottom is TRUE and justify is "END" (or when topToBottom is FALSE and justify is "BEGIN"), the bottom left is at the origin. For justify = "MIDDLE" and horizontal = FALSE, the text is centered vertically about Y = 0.
[Tables illustrating text placement for horizontal = TRUE and horizontal = FALSE are not reproduced here; in each, a small cross marks where the X and Y axes fall in relation to the text.]
Text Language: There are many languages in which the proper rendering of the text requires more than just a sequence of glyphs. The language field allows the author to specify which, if any, language specific rendering techniques to use. For simple languages, such as English, this field may be safely ignored.
The tag used to specify languages will follow RFC1766, "Tags for the Identification of Languages." This RFC specifies that a language tag may simply be a two-letter ISO 639 tag, for example "en" for English, "ja" for Japanese, or "sv" for Swedish. This may be optionally followed by a hyphen and a two-letter country code from ISO 3166. American English, for instance, could be specified as "en-US".
FontStyle {
    field SFFloat  size        1.0
    field SFString family      "SERIF"    # "SERIF", "SANS", "TYPEWRITER"
    field SFString style       ""         # "BOLD", "ITALIC", "BOLD ITALIC"
    field SFBool   horizontal  TRUE
    field SFBool   leftToRight TRUE
    field SFBool   topToBottom TRUE
    field SFString language    ""
}
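For example, two centered lines of bold sans-serif text might be written:

    Text {
        string  [ "Moving Worlds", "VRML 2.0" ]
        justify "MIDDLE"
        spacing 1.2
        fontStyle FontStyle {
            family "SANS"
            style  "BOLD"
            size   2        # glyphs two units high
        }
    }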
The Material node defines surface material properties for associated geometry nodes.
The fields in the Material node determine the way light reflects off an object to create color.
The lighting parameters defined by the Material node are the same parameters defined by the OpenGL lighting model. For a rigorous mathematical description of how these parameters should be used to determine how surfaces are lit, see the description of lighting operations in the OpenGL Specification. Also note that OpenGL specifies the specular exponent as a non-normalized value in the range 0-128, whereas VRML specifies shininess as a normalized value in the range 0-1; multiply the VRML value by 128 to obtain the OpenGL parameter.
For rendering systems that do not support the full OpenGL lighting model, a simpler lighting model is recommended.
A transparency value of 0 is completely opaque; a value of 1 is completely transparent. Browsers need not support partial transparency, but should support at least fully transparent and fully opaque surfaces, treating transparency values >= 0.5 as fully transparent.
Issues for Low-End Rendering Systems. Many low-end PC rendering systems are not able to support the full range of the VRML material specification. For example, many systems do not render individual red, green and blue reflected values as specified in the specularColor field. The following table describes which Material fields are typically supported in popular low-end systems and suggests actions for browser implementors to take when a field is not supported.
    Field              Supported?   Suggested Action
    ambientIntensity   No           Ignore
    diffuseColor       Yes          Use
    specularColor      No           Ignore
    emissiveColor      No           Use in place of diffuseColor if != 0 0 0
    shininess          Yes          Use
    transparency       No           Ignore
Rendering systems which do not support specular color may nevertheless support a specular intensity. This should be derived by taking the dot product of the specified RGB specular value with the vector [.32 .57 .11]. This adjusts the color value to compensate for the variable sensitivity of the eye to colors.
Likewise, if a system supports ambient intensity but not ambient color, the same computation should be applied to the ambient color values to generate the ambient intensity. If a rendering system does not support per-object ambient values, it should set the ambient value for the entire scene to the average ambient value of all objects.
It is also expected that simpler rendering systems may be unable to support both diffuse and emissive objects in the same world.
Material {
    exposedField SFColor diffuseColor     0.8 0.8 0.8
    exposedField SFFloat ambientIntensity 0.2
    exposedField SFColor specularColor    0 0 0
    exposedField SFColor emissiveColor    0 0 0
    exposedField SFFloat shininess        0.2
    exposedField SFFloat transparency     0
}
The Texture2 node defines a texture map and parameters for that map.
The texture can be read from the URL specified by the filename field. To turn off texturing, set the filename field to have no values ([]). Implementations should support the JPEG and PNG image file formats. Support for the GIF format and for MPEG is also recommended. If MPEG is supported, the fraction field specifies which frame of the sequence should be used as the texture. A fraction of 0 indicates that the first frame is displayed, and a fraction of 1 indicates that the last frame is displayed. Connecting this field to the fraction eventOut of a TimeSensor allows the texture to be animated by the MPEG movie.
Textures can also be specified inline by setting the image field to contain the texture data. Supplying both image and filename fields will result in undefined behavior.
Texture images may be one-component (greyscale), two-component (greyscale plus transparency), three-component (full RGB color), or four-component (full RGB color plus transparency). An ideal VRML implementation will use the texture image to modify the diffuse color and transparency of an object's material (specified in a Material node), then perform any lighting calculations using the rest of the object's material properties with the modified diffuse color to produce the final image. How the texture image modifies the diffuse color and transparency depends on how many components are in the image: the greyscale or color data affects the diffuse color, and the transparency component, when present, also affects the material's transparency.
Browsers may approximate this ideal behavior to increase performance. One common optimization is to calculate lighting only at each vertex and combine the texture image with the color computed from lighting (performing the texturing after lighting). Another common optimization is to perform no lighting calculations at all when texturing is enabled, displaying only the colors of the texture image.
The repeatS and repeatT fields specify how the texture wraps in the S and T directions. If repeatS is TRUE (the default), the texture map is repeated outside the 0-to-1 texture coordinate range in the S direction so that it fills the shape. If repeatS is FALSE, the texture coordinates are clamped in the S direction to lie within the 0-to-1 range. The repeatT field is analogous to the repeatS field.
Texture2 {
    exposedField MFString filename [ ]
    exposedField SFImage  image    0 0 0
    exposedField SFFloat  fraction 0
    field        SFBool   repeatS  TRUE
    field        SFBool   repeatT  TRUE
}
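For example, on a browser that supports MPEG textures, a movie could be played back as an animated texture by routing a TimeSensor's fraction into the texture (the file name is hypothetical):

    DEF MOVIE Texture2 { filename "clip.mpg" }    # hypothetical MPEG file
    DEF CLOCK TimeSensor {
        cycleInterval 5     # one pass through the movie every 5 seconds
        cycleCount    0     # tick continuously
    }
    ROUTE CLOCK.fraction TO MOVIE.fraction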
The Texture2Transform node defines a 2D transformation that is applied to texture coordinates. This node is used only in the textureTransform field of the Appearance node and affects the way textures are applied to the surfaces of the associated Geometry node. The transformation consists of (in order) a nonuniform scale about an arbitrary center point, a rotation about that same point, and a translation. This allows a user to change the size and position of the textures on shapes.
Texture2Transform {
    field SFVec2f translation 0 0
    field SFFloat rotation    0
    field SFVec2f scale       1 1
    field SFVec2f center      0 0
}
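For example, the following Appearance tiles its texture four times in each direction across a surface, assuming the default repeatS and repeatT of TRUE (the image name is hypothetical):

    Appearance {
        texture Texture2 { filename "brick.jpg" }    # hypothetical image
        textureTransform Texture2Transform {
            scale  4 4        # texture coordinates 0..1 become 0..4
            center 0.5 0.5    # scaled about the middle of the image
        }
    }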
Geometric sensor nodes are children of a Transform node. They generate events with respect to the Transform's coordinate system and children.
Proximity sensors are nodes that generate events when the viewpoint enters, exits, and moves inside a space. A proximity sensor can be activated or deactivated by sending it an enabled event with a value of TRUE or FALSE.
A BoxProximitySensor generates isActive TRUE/FALSE events as the viewer enters/exits the region defined by its center and size fields. Ideally, implementations will interpolate viewpoint positions and timestamp the isActive events with the exact time the viewpoint first intersected the volume.
A BoxProximitySensor with a (0 0 0) size field (the default) will sense the region defined by the objects in its coordinate system. The axis-aligned bounding box of the Transform containing the BoxProximitySensor should be computed and used instead of the center and size fields in this case.
position and orientation events giving the position and orientation of the viewer in the BoxProximitySensor's coordinate system are generated when either the user or the coordinate system of the sensor moves and the viewer is inside the region being sensed.
Multiple BoxProximitySensors will generate events at the same time if the regions they are sensing overlap. Unlike ClickSensors, there is no notion of a BoxProximitySensor lower in the scene graph "grabbing" events.
A BoxProximitySensor that surrounds the entire world will have an enter time equal to the time that the world was entered and can be used to start up animations or behaviors as soon as a world is loaded.
BoxProximitySensor {
    exposedField SFVec3f    center  0 0 0
    exposedField SFVec3f    size    0 0 0
    exposedField SFBool     enabled TRUE
    eventOut     SFBool     isActive
    eventOut     SFVec3f    position
    eventOut     SFRotation orientation
}
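As a sketch, the following senses a 10-meter cube of space and reports entry and exit to a script (the script file and eventIn name are hypothetical):

    Transform {
        children [
            DEF NEARBY BoxProximitySensor { size 10 10 10 }
        ]
    }
    DEF WATCHER Script {
        scriptType "javabc"
        behavior   "watcher.class"    # hypothetical script
        eventIn SFBool visitorPresent
    }
    ROUTE NEARBY.isActive TO WATCHER.visitorPresent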
A ClickSensor tracks the pointing device with respect to its sibling nodes. This sensor can be activated or deactivated by sending it an enabled event with a value of TRUE or FALSE.
The ClickSensor generates events as the pointing device passes over the geometry defined by nodes that are children of the same Group or Transform as the ClickSensor. When the pointing device is over the geometry, this sensor will also generate button press and release events for the button associated with the pointing device. Typically, the pointing device is a mouse and the button is a mouse button.
isOver TRUE/FALSE events are generated as the pointing device moves over the ClickSensor's geometry. When the pointing device is unobstructed by any other surface and moves on top of the ClickSensor's geometry, an isOver TRUE event should be generated. When the pointing device moves and is no longer on top of the geometry, or some other geometry is obstructing the ClickSensor's geometry, an isOver FALSE event should be generated.
All of these events are generated only when the pointing device moves or the user clicks the button; events are not generated if the geometry itself is animating and moving underneath the pointing device.
If the user presses the button associated with the pointing device while the cursor is located over its geometry, the ClickSensor will grab all further motion events from the pointing device until the button is released (other Click or Drag sensors will not generate events during this time). isActive TRUE/FALSE events are generated along with the press/release events. Motion of the pointing device while it has been grabbed by a ClickSensor is referred to as a "drag".
As the user drags the cursor over the ClickSensor's geometry, the point on that geometry which lies directly underneath the cursor is determined. When isOver and isActive are TRUE, hitPoint, hitNormal, and hitTexCoord events are generated whenever the pointing device moves. hitPoint events contain the 3D point on the surface of the underlying geometry, given in the ClickSensor's coordinate system. hitNormal events contain the surface normal at the hitPoint. hitTexCoord events contain the texture coordinates of that surface at the hitPoint, which can be used to support the 3D equivalent of an image map.
ClickSensor {
    exposedField SFBool  enabled TRUE
    eventOut     SFBool  isOver
    eventOut     SFBool  isActive
    eventOut     SFVec3f hitPoint
    eventOut     SFVec3f hitNormal
    eventOut     SFVec2f hitTexCoord
}
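For example, a ClickSensor placed as a sibling of a text shape makes that text act as a button (the script details are hypothetical):

    Transform {
        children [
            DEF PRESS ClickSensor { },
            Shape { geometry Text { string "Click me" } }    # the sensed geometry
        ]
    }
    DEF BUTTON Script {
        scriptType "javabc"
        behavior   "button.class"    # hypothetical script
        eventIn SFBool pressed
    }
    ROUTE PRESS.isActive TO BUTTON.pressed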
The CylinderSensor maps dragging motion into a rotation around the Y axis of its local space. The feel of the rotation is as if you were turning a rolling pin.
CylinderSensor {
    exposedField SFFloat    minAngle 0
    exposedField SFFloat    maxAngle 0
    exposedField SFBool     enabled  TRUE
    eventOut     SFVec3f    trackPoint
    eventOut     SFRotation rotation
    eventOut     SFBool     onCylinder
}
minAngle and maxAngle may be set to clamp rotation events to a range of values (measured in radians about the Y axis). If minAngle is greater than maxAngle, rotation events are not clamped.
Upon the initial click down on the CylinderSensor's geometry, the specific point clicked determines the radius of the cylinder used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this cylinder, or in the plane perpendicular to the view vector if the cursor moves off this cylinder. An onCylinder TRUE event is generated at the initial click down; thereafter, onCylinder FALSE/TRUE events are generated if the pointing device is dragged off/on the cylinder.
The DiskSensor maps dragging motion into a rotation around the Z axis of its local space. The feel of the rotation is as if you were scratching on a record turntable.
DiskSensor {
    exposedField SFFloat    minAngle 0
    exposedField SFFloat    maxAngle 0
    exposedField SFBool     enabled  TRUE
    eventOut     SFVec3f    trackPoint
    eventOut     SFRotation rotation
}
minAngle and maxAngle may be set to clamp rotation events to a range of values as measured in radians about the Z axis. If minAngle is greater than maxAngle, rotation events are not clamped. trackPoint events provide unclamped drag position in the XY plane.
The PlaneSensor maps dragging motion into a translation in two dimensions, in the XY plane of its local space.
PlaneSensor {
    exposedField SFVec2f minPosition 0 0
    exposedField SFVec2f maxPosition -1 -1
    exposedField SFBool  enabled     TRUE
    eventOut     SFBool  isOver
    eventOut     SFBool  isActive
    eventOut     SFVec3f hitPoint
    eventOut     SFVec3f hitNormal
    eventOut     SFVec2f hitTexCoord
    eventOut     SFVec3f trackPoint
    eventOut     SFVec3f translation
}
minPosition and maxPosition may be set to clamp translation events to a range of values as measured from the origin of the XY plane. If the X or Y component of minPosition is greater than the corresponding component of maxPosition, translation events are not clamped in that dimension. If the X or Y component of minPosition is equal to the corresponding component of maxPosition, that component is constrained to the given value; this technique provides a way to implement a line sensor that maps dragging motion into a translation in one dimension. (There is no built-in line sensor node.)
trackPoint events provide the unclamped drag position in the XY plane.
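For example, the line-sensor technique described above can be used to build a horizontal slider; because the Y components of minPosition and maxPosition are equal, translation is constrained to the line Y = 0:

    PlaneSensor {
        minPosition 0 0      # X clamped to the range 0..10,
        maxPosition 10 0     # Y constrained to exactly 0
    }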
The SphereSensor maps dragging motion into a free rotation about its center. The feel of the rotation is as if you were rolling a ball.
SphereSensor {
    exposedField SFBool     enabled TRUE
    eventOut     SFVec3f    trackPoint
    eventOut     SFRotation rotation
    eventOut     SFBool     onSphere
}
The free rotation of the SphereSensor is always unclamped.
Upon the initial click down on the SphereSensor's geometry, the point hit determines the radius of the sphere used to map pointing device motion while dragging. trackPoint events always reflect the unclamped drag position on the surface of this sphere, or in the plane perpendicular to the view vector if the cursor moves off of the sphere. An onSphere TRUE event is generated at the initial click down; thereafter, onSphere FALSE/TRUE events are generated if the pointing device is dragged off/on the sphere.
The Background, NavigationInfo, Script, TimeSensor, and WorldInfo nodes are not part of the world's transformational hierarchy.
The Background, NavigationInfo, and WorldInfo nodes are global nodes that affect everything in the scene. They can be used anywhere in the scene description and may appear in fields of a Script node. If more than one Background node appears in a file, the first Background node read is the one that is used; the same rule applies to the NavigationInfo and WorldInfo nodes.
The Background node is used to specify a color-ramp backdrop that simulates ground and sky planes, as well as an environment texture, or panorama, that is placed behind all geometry in the scene and in front of the backdrop.
The backdrop is conceptually a sphere with an infinite radius, painted with a smooth gradation of ground colors (starting with a circle straight downward and rising in concentric bands up to the horizon) and a separate gradation of sky colors (starting with a circle straight upward and falling in concentric bands down to the horizon). (It's acceptable to implement the backdrop as a cube painted in concentric square rings instead of as a sphere.) The groundRange field is a list of floating point values that indicate the cutoff for each color in groundColor. Its implicit initial value is 0 radians (downward), and the final value given indicates the elevation angle of the horizon, where the ground color ramp and the sky color ramp meet. The skyRange field implicitly starts at 0 radians (upward) and works its way down to pi radians. If groundColor is empty, no ground colors are used.
The posX, negX, posY, negY, posZ, and negZ fields define a background panorama between the backdrop and the world's geometry. The panorama consists of six images, each of which is mapped onto a face of a cube surrounding the world. Transparency values in the panorama images specify that the panorama is transparent in particular places, allowing the groundColor and skyColor to show through. (Often, the posY and negY images will not be specified, to allow sky and ground to show. The other four images may depict mountains or other distant scenery.) By default, there is no panorama.
The first Background node found during reading of the world is used as the initial background. Subsequent Background nodes are ignored. The background may be changed by Script node API calls.
Ground colors, sky colors, and panoramic images do not translate with respect to the viewer, though they do rotate with respect to the viewer. That is, the viewer can never get any closer to the background, but can turn to examine all sides of the panorama cube, and can look up and down to see the concentric rings of ground and sky (if visible).
Background {
    exposedField MFColor  groundColor [ 0.14 0.28 0.00,    # light green
                                        0.09 0.11 0.00 ]   # to dark green
    exposedField MFFloat  groundRange [ .785 ]    # horizon = 45 degrees
    exposedField MFColor  skyColor    [ 0.02 0.00 0.26,    # twilight blue
                                        0.02 0.00 0.65 ]   # to light blue
    exposedField MFFloat  skyRange    [ .785 ]    # horizon = 45 degrees
    exposedField MFString posX [ ]
    exposedField MFString negX [ ]
    exposedField MFString posY [ ]
    exposedField MFString negY [ ]
    exposedField MFString posZ [ ]
    exposedField MFString negZ [ ]
}
The NavigationInfo node contains information for the viewer through several fields: type, speed, size, visibilityLimit, and headlight.
The type field specifies a navigation paradigm to use. The types that all VRML viewers should support are "WALK", "EXAMINE", "FLY", and "NONE". A walk viewer is used for exploring a virtual world; the viewer should (but is not required to) have some notion of gravity in this mode. A fly viewer is similar to walk, except that no notion of gravity should be enforced; there should still be some notion of "up", however. An examine viewer is typically used to view individual objects and often includes (but does not require) the ability to spin the object and move it closer or farther away. The "NONE" choice removes all viewer controls; the user navigates using only controls provided in the scene, such as guided tours. Browser-specific viewer types are also allowed; these should include a suffix as described in the naming conventions section to prevent conflicts. The type field is multi-valued so that authors can specify fallbacks in case a browser does not understand a given type.
The speed is the rate at which the viewer travels through a scene in units per second. Since viewers may provide mechanisms to travel faster or slower, this should be the default or average speed of the viewer. In an examiner viewer, this only makes sense for panning and dollying--it should have no effect on the rotation speed.
The size field specifies parameters to be used in determining the camera dimensions for the purpose of collision detection and terrain following, if the viewer type allows these. It is a multi-valued field so that several dimensions can be specified. The first value should be the allowable distance between the user's position and any collision geometry (as specified by Collision) before a collision is detected. The second should be the height above the terrain at which the camera should be maintained. The third should be the height of the tallest object over which the camera can "step". This allows staircases to be built with dimensions that can be ascended by all browsers. Additional values are browser-dependent, and all values may be ignored, but if a browser interprets these values, the first three should be interpreted as described above.
The visibilityLimit field sets the furthest distance the viewer is able to see. The browser may clip all objects beyond this limit, fade them into the background or ignore this field. A value of 0.0 (the default) indicates an infinite visibility limit.
The headlight field specifies whether a browser should turn on a headlight. A headlight is a directional light that always points in the direction the user is looking. Setting this field to TRUE allows the browser to provide a headlight, possibly with user interface controls to turn it on and off. Scenes that use precomputed lighting (e.g., radiosity solutions) can turn the headlight off here. The headlight should have intensity 1, color 1 1 1, and direction 0 0 -1.
The first NavigationInfo node found during reading of the world supplies the initial navigation parameters. Subsequent NavigationInfo nodes are ignored. The browser may be told to use a different NavigationInfo node using Script node API calls.
NavigationInfo {
    exposedField MFString type            "WALK"
    exposedField SFFloat  speed           1.0
    exposedField MFFloat  size            1.0
    exposedField MFFloat  visibilityLimit 0.0
    exposedField SFBool   headlight       TRUE
}
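For example, a world tuned for object inspection, with walking as a fallback and its own precomputed lighting, might specify (the values are illustrative):

    NavigationInfo {
        type      [ "EXAMINE", "WALK" ]    # prefer EXAMINE, fall back to WALK
        speed     2.0                      # average travel speed, units/second
        size      [ 0.5, 1.6, 0.4 ]       # collision distance, camera height, step height
        headlight FALSE                    # scene supplies its own lighting
    }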
Files that describe node behavior are referenced through a Script node. Each Script node has associated code in some programming language that is executed to carry out the Script node's function. That code will be referred to as "the script" in the rest of this description.
A Script node's scriptType field describes which scripting language is being used. The contents of the behavior field depend on which scripting language is being used. Typically the behavior field will contain URLs/URNs from which the script should be fetched.
Each scripting language supported by a browser defines bindings for the following functionality. See Appendices A and B for the standard Java and C language bindings.
The script is created, and any language-dependent or user-defined initialization is performed. The script should be able to receive and process events that are sent to it. Each event that can be received must be declared in the Script node using the same syntax as is used in a prototype definition:
eventIn type name
"eventIn" is a VRML keyword. The type can be any of the standard VRML field types, and name must be an identifier that is unique for this Script node.
The Script node should be able to generate events in response to the incoming events. Each event that can be generated must be declared in the Script node using the following syntax:
eventOut type name
If the Script node's mustEvaluate field is FALSE, the browser can delay sending input events to the script until its outputs are needed by the browser. If the mustEvaluate field is TRUE, the browser should send input events to the script as soon as possible, regardless of whether the outputs are needed. The mustEvaluate field should be set to TRUE only if the Script has effects that are not known to the browser (such as sending information across the network); otherwise, poor performance may result.
An example of a Script node is
Script {
    behavior     "http://foo.com/bar.class"    # an MFString value
    scriptType   "javabc"
    mustEvaluate TRUE
    eventIn  SFString name
    eventIn  SFBool   selected
    eventOut SFString lookto
    field    SFInt32  currentState 0
}
The script should be able to read and write the fields of the corresponding Script node. The Script node is responsible for implementing the behavior of exposed fields; the browser will not automatically update the value of an exposed field and will not automatically generate an eventOut when an exposed field changes.
Once the script has access to some VRML node (via an SFNode or MFNode value either in one of the Script node's fields or passed in as an eventIn), the script should be able to read the contents of that node's exposed fields. If the Script node's directOutputs field is TRUE, the script may also send events directly to any node to which it has access.
A script should also be able to communicate directly with the VRML browser to get and set global information such as navigation information, the current time, the current world URL, and so on.
It is expected that all other functionality (such as networking capabilities, multi-threading capabilities, and so on) will be provided by the scripting language.
Script {
    field MFString behavior      [ ]
    field SFString scriptType    ""
    field SFBool   mustEvaluate  FALSE
    field SFBool   directOutputs FALSE

    # And any number of:
    eventIn      eventTypeName eventName
    field        fieldTypeName fieldName initialValue
    exposedField fieldTypeName fieldName initialValue
    eventOut     eventTypeName eventName
}
TimeSensors generate events as time passes. TimeSensors remain inactive until their startTime is reached. At the first simulation tick when "now" is greater than or equal to startTime, the TimeSensor will begin generating time and fraction events, which may be routed to other nodes to drive continuous animation or simulated behaviors.
The length of time a TimeSensor generates events is controlled using cycleInterval and cycleCount; a TimeSensor stops generating time events at time startTime+cycleInterval*cycleCount. The time events contain times relative to startTime, so they will start at zero and increase up to cycleInterval*cycleCount.
The forward and back fields control the mapping of time to fraction values. If forward is TRUE and back is FALSE (the default), fraction events will rise from 0.0 to 1.0 over each interval. If forward is FALSE and back is TRUE, the opposite will happen (fraction events will fall from 1.0 to 0.0 during each interval). If they are both TRUE, fraction events will alternate 0.0 to 1.0, 1.0 to 0.0, reversing direction on each interval. If they are both FALSE, then fraction and time events will be generated only once per cycle (and the fraction values generated will always be 0).
pauseTime may be set to interrupt the progress of a TimeSensor. If pauseTime is greater than startTime, time and fraction events will not be generated after the pause time. pauseTime is ignored if it is less than or equal to startTime.
A TimeSensor will generate an isActive TRUE event when it begins generating times, and will generate an isActive FALSE event when it stops generating times (either because pauseTime was reached or because time startTime+cycleInterval*cycleCount was reached).
If cycleCount is less than or equal to 0, the TimeSensor will tick continuously, as if cycleCount were infinite. This use of the TimeSensor should be employed with caution, since it incurs continuous overhead on the simulation.
Setting cycleCount to 1 and cycleInterval to 0 will result in a single event being generated at startTime; this can be used to build an alarm that goes off at some point in the future.
No guarantees are made with respect to how often a TimeSensor will generate time events, but TimeSensors are guaranteed to generate final fraction and time events at or after time (startTime+cycleInterval*cycleCount) if pauseTime is less than or equal to startTime.
TimeSensor {
    exposedField SFTime  startTime     0
    exposedField SFTime  pauseTime     0
    exposedField SFTime  cycleInterval 1
    exposedField SFInt32 cycleCount    1
    exposedField SFBool  forward       TRUE
    exposedField SFBool  back          FALSE
    eventOut     SFBool  isActive
    eventOut     SFTime  time
    eventOut     SFFloat fraction
}
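For example, the following sensor ramps its fraction output from 0.0 to 1.0 over ten seconds and then stops; the receiving script is hypothetical:

    DEF RAMP TimeSensor {
        cycleInterval 10    # fraction rises from 0.0 to 1.0 over ten seconds
        cycleCount    1     # run once, then stop
        # startTime would normally be set at run time,
        # e.g. routed from another sensor
    }
    ROUTE RAMP.fraction TO ANIMSCRIPT.fraction    # ANIMSCRIPT: a hypothetical Script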
The WorldInfo node contains information about the world. The title of the world is stored in its own field, allowing browsers to display it--for instance, in their window border. Any other information about the world can be stored in the info field--for instance, the scene author, copyright information, and public domain information.
WorldInfo { field SFString title "" field MFString info [ ] }
A Group node is a lightweight grouping node that can contain any number of children. It is equivalent to a Transform node, without the transformation fields.
PROTO Group [
    field        SFVec3f bboxCenter 0 0 0
    field        SFVec3f bboxSize   0 0 0
    exposedField MFNode  children   [ ]
    eventIn      MFNode  add_children
    eventIn      MFNode  remove_children
] {
    Transform {
        bboxCenter      IS bboxCenter
        bboxSize        IS bboxSize
        children        IS children
        add_children    IS add_children
        remove_children IS remove_children
    }
}
The LOD node is used to allow browsers to switch between various representations of objects automatically. The levels field contains nodes that represent the same object or objects at varying levels of detail, from highest detail to lowest.
First, the distance from the viewpoint (transformed into the local coordinate space of the LOD node, including any scaling transformations) to the center point of the LOD is calculated. If the distance is less than the first value in the range field, then the first level of the LOD is drawn. If it is between the first and second values in the range field, the second level is drawn, and so on.
If there are N values in the range field, the LOD should have N+1 nodes in its levels field. Specifying too few levels will result in the last level being used repeatedly for the lowest levels of detail; if too many levels are specified, the extra levels will be ignored. The exception to this rule is to leave the range field empty, which is a hint to the browser that it should choose a level automatically to maintain a constant display rate.
Each value in the range field should be greater than the previous value; otherwise results are undefined. Not specifying any values in the range field (the default) is a special case that indicates that the browser may decide which child to draw to optimize rendering performance.
Authors should set LOD ranges so that the transitions from one level of detail to the next are barely noticeable. Browsers may adjust which level of detail is displayed to maintain interactive frame rates, may display an already-fetched level of detail while a higher level of detail (contained in a WWWInline node) is fetched, or may disregard the author-specified ranges for any other implementation-dependent reason. Authors should not use LOD nodes to emulate simple behaviors, because the results will be undefined. For example, using an LOD node to make a door appear to open when the user approaches probably will not work in all browsers. Use a BoxProximitySensor instead.
For best results, specify ranges only where necessary, and nest LOD nodes with and without ranges. For example:
LOD {
    range [100, 1000]
    levels [
        LOD {
            levels [
                Transform { ... detailed version... },
                DEF LoRes Transform { ... less detailed version... }
            ]
        },
        USE LoRes,
        Shape { }    # Display nothing
    ]
}
In this example, the browser is free to choose either a detailed or a less-detailed version of the object when the viewer is closer than 100 meters. The browser should display the less-detailed version of the object if the viewer is between 100 and 1,000 meters and should display nothing at all if the viewer is farther than 1,000 meters. Browsers should try to honor the hints given by authors, and authors should try to give browsers as much freedom as they can to choose levels of detail based on performance.
PROTO LOD [
    field        MFFloat range  [ ]
    field        SFVec3f center 0 0 0
    exposedField MFNode  levels [ ]
] {
    DEF F Transform {
        children [
            DEF PS BoxProximitySensor { center IS center }
        ]
    }
    DEF LODSCRIPT Script {
        eventOut MFNode  remove
        eventOut MFNode  add
        eventOut SFVec3f maxRange
        eventIn  SFVec3f viewerPosition
        field    MFFloat range  IS range
        field    MFNode  levels IS levels
        #
        # Script must:
        # -- set maxRange to maximum value in range[] field
        # -- get viewerPosition, figure out which level should
        #    be seen, add/remove appropriate children
    }
    ROUTE PS.position TO LODSCRIPT.viewerPosition
    ROUTE LODSCRIPT.maxRange TO PS.size
    ROUTE LODSCRIPT.remove TO F.remove_children
    ROUTE LODSCRIPT.add TO F.add_children
}
The Switch grouping node traverses zero or one of its children (which are specified in the choices field).
The whichChild field specifies the index of the child to traverse, where the first child has index 0. If whichChild is less than zero or greater than or equal to the number of nodes in the choices field, nothing is chosen.
PROTO Switch [
    exposedField SFInt32 whichChild -1
    exposedField MFNode  choices    [ ]
] {
    DEF F Transform { }
    DEF SWITCHSCRIPT Script {
        eventOut MFNode remove
        eventOut MFNode add
        exposedField SFInt32 whichChild IS whichChild
        exposedField MFNode  choices    IS choices
        #
        # Script must:
        # -- keep whichChild up-to-date
        # -- figure out which child should be seen when whichChild
        #    changes, add/remove appropriate children
    }
    ROUTE SWITCHSCRIPT.remove TO F.remove_children
    ROUTE SWITCHSCRIPT.add TO F.add_children
}
The WWWAnchor grouping node causes some data to be fetched over the network when any of its children are chosen. If the data pointed to is a VRML world, then that world is loaded and displayed instead of the world of which the WWWAnchor is a part. If another data type is fetched, it is up to the browser to determine how to handle that data; typically, it will be passed to an appropriate, already-open (or newly spawned) general Web browser.
Exactly how a user "chooses" a child of the WWWAnchor is up to the VRML browser; typically, clicking on one of its children with the mouse will result in the new scene replacing the current scene. A WWWAnchor with an empty ("") name does nothing when its children are chosen.
The name field contains an arbitrary set of URLs. If multiple URLs are presented, this expresses a descending order of preference. A browser may display a lower-preference URL if the higher-preference file is not available. See the section on URLs and URNs.
The description field in the WWWAnchor allows for a friendly prompt to be displayed as an alternative to the URL in the name field. Ideally, browsers will allow the user to choose the description, the URL, or both to be displayed for a candidate WWWAnchor.
A WWWAnchor may be used to take the viewer to a particular viewpoint in a virtual world by specifying a URL ending with "#viewpointName", where "viewpointName" is the name of a viewpoint defined in the world. For example:
WWWAnchor {
    name "http://www.school.edu/vrml/someScene.wrl#OverView"
    Cube { }
}
specifies an anchor that puts the viewer in the "someScene" world looking from the viewpoint named "OverView" when the Cube is chosen. If no world is specified, then the current scene is implied; for example:
WWWAnchor {
    name "#Doorway"
    children [ Sphere { } ]
}
will take the viewer to the viewpoint defined by the "Doorway" viewpoint in the current world when the sphere is chosen.
PROTO WWWAnchor [
    field        MFString name        [ ]
    field        SFString description ""
    exposedField MFNode   children    [ ]
] {
    Group {
        children [
            DEF CS ClickSensor { },
            Group { children IS children }
        ]
    }
    DEF ASCRIPT Script {
        mustEvaluate TRUE
        field   MFString url IS name
        eventIn SFBool   loadWorld
        #
        # Script must load new world (using loadWorld() Script API)
        # when ClickSensor is clicked
    }
    ROUTE CS.isActive TO ASCRIPT.loadWorld
}
The WWWInline node is a light-weight grouping node like Group that reads its children from anywhere in the World Wide Web. Exactly when its children are read is not defined; reading the children may be delayed until the WWWInline is actually displayed. A WWWInline with an empty name does nothing. The name is an arbitrary set of URLs.
A WWWInline's URLs must refer to a valid VRML file that contains a grouping or leaf node. The results of referring to non-VRML files, or to VRML files that do not contain a grouping or leaf node, are undefined.
If multiple URLs are specified, then this expresses a descending order of preference. A browser may display a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URLs and URNs.
If the WWWInline's bboxSize field specifies a non-empty bounding box (a bounding box is non-empty if at least one of its dimensions is greater than zero), then the WWWInline's object-space bounding box is specified by its bboxSize and bboxCenter fields. This allows an implementation to quickly determine whether or not the contents of the WWWInline might be visible. This is an optimization hint only; if the true bounding box of the contents of the WWWInline is different from the specified bounding box, results will be undefined.
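For example (a sketch reusing the URL from the WWWAnchor examples; the box dimensions are illustrative), an author might declare a 10x4x10-meter bounding box centered 2 meters above the local origin:

WWWInline {
    name [ "http://www.school.edu/vrml/someScene.wrl" ]
    bboxCenter 0 2 0
    bboxSize   10 4 10
}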
PROTO WWWInline [
    field MFString name       [ ]
    field SFVec3f  bboxSize   0 0 0
    field SFVec3f  bboxCenter 0 0 0
] {
    DEF G Group {
        bboxSize   IS bboxSize
        bboxCenter IS bboxCenter
    }
    DEF ISCRIPT Script {
        field    MFString url IS name
        eventOut MFNode   children
        #
        # Script's initialization code should call browser's
        # createVrmlFromURL() function, then send resulting node out to
        # children eventOut.
    }
    ROUTE ISCRIPT.children TO G.addChildren
}
The PointSound node defines a sound source located at a specific 3D location. The name field specifies a URL from which the sound is read. Implementations should support at least the ??? ??? sound file formats. Streaming sound files may be supported by browsers; otherwise, sounds should be loaded when the sound node is loaded. Browsers may limit the maximum number of sounds that can be played simultaneously.
If multiple URLs are specified, then this expresses a descending order of preference. A browser may use a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URNs.
The description field is a textual description of the sound, which may be displayed in addition to or in place of playing the sound.
The intensity field adjusts the volume of each sound source; an intensity of 0 is silence, and an intensity of 1 is whatever intensity is contained in the sound file.
The sound source has a radius specified by the minRadius field. When the viewpoint is within this radius, the sound's intensity (volume) is constant, as indicated by the intensity field. Outside the minRadius, the intensity drops off to zero at a distance of maxRadius from the source location. If the two radii are equal, the drop-off is sharp and sudden. Otherwise, the drop-off should be proportional to the square of the distance of the viewpoint from the minRadius.
Browsers may also support spatial localizations of sound. However, within minRadius, localization should not occur, so intensity is constant in all channels. Between minRadius and maxRadius, the sound location should be the point on the minRadius sphere that is closest to the current viewpoint. This ensures a smooth change in location when the viewpoint leaves the minRadius sphere. Note also that an ambient sound can therefore be created by using a large minRadius value.
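For example, a looping sound can be made effectively ambient by giving it a very large minRadius (a minimal sketch; the sound file name is hypothetical and the radii are illustrative):

PointSound {
    name [ "background.wav" ]   # hypothetical sound file
    minRadius 1000              # viewer is nearly always inside this radius
    maxRadius 2000
    loop TRUE
}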
The loop field specifies whether or not the sound is constantly repeated. By default, the sound is played only once. If the loop field is FALSE, the sound has length "length," which is not specified in the VRML file but is implicit in the sound file pointed to by the URL in the name field. If the loop field is TRUE, the sound has an infinite length.
The start field specifies the time at which the sound should start playing. The pause field may be used to make a sound stop playing some time after it has started.
With the start time "start," pause time "pause," and current time "now," the rules are as follows:
if:      now < start                        : OFF
else if: now >= start + length              : OFF
else if: (pause > start) AND (now >= pause) : OFF
else:                                         ON
Whenever start, pause, or "now" changes, the above rules need to be applied to figure out if the sound is playing. If it is, then it should be playing the bit of sound at (now - start) or, if it is looping, fmod( now - start, realLength).
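As a worked example (values are illustrative): a non-looping sound of length 5 seconds with start = 10 and pause = 0 is ON for 10 <= now < 15, playing the bit of sound at (now - 10); if pause is then set to 12 (which is greater than start), the sound turns OFF once now reaches 12.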
A sound's location in the scene graph determines its spatial location (the sound's location is transformed by the current transformation) and whether or not it can be heard. A sound can only be heard while it is part of the traversed scene; sound nodes underneath LOD nodes or Switch nodes will not be audible unless they are traversed. If it is later part of the traversal again, the sound picks up where it would have been had it been playing continuously.
PROTO PointSound [
    field        MFString name        [ ]
    field        SFString description ""
    exposedField SFFloat  intensity   1
    exposedField SFVec3f  location    0 0 0
    exposedField SFFloat  minRadius   10
    exposedField SFFloat  maxRadius   10
    exposedField SFBool   loop        FALSE
    exposedField SFTime   start       0
    exposedField SFTime   pause       0
] {
    DirectedSound {
        name        IS name
        description IS description
        intensity   IS intensity
        location    IS location
        loop        IS loop
        start       IS start
        pause       IS pause
        minFront    IS minRadius
        minBack     IS minRadius
        maxFront    IS maxRadius
        maxBack     IS maxRadius
    }
}
This node represents a simple cone whose central axis is aligned with the Y axis. By default, the cone is centered at (0,0,0) and has a size of -1 to +1 in all three directions. The cone has a radius of 1 at the bottom and a height of 2, with its apex at 1 and its bottom at -1.
The cone has two parts: the side and the bottom. Each part has an associated SFBool field that specifies whether it is visible (TRUE) or invisible (FALSE).
When a texture is applied to a cone, it is applied differently to the sides and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cone. The texture has a vertical seam at the back, intersecting the YZ plane. For the bottom, a circle is cut out of the texture square and applied to the cone's base circle. The texture appears right side up when the top of the cone is rotated towards the -Z axis.
PROTO Cone [
    field SFFloat bottomRadius 1
    field SFFloat height       2
    field SFBool  side         TRUE
    field SFBool  bottom       TRUE
] {
    ... equivalent to an IndexedFaceSet plus generator script...
}
This node represents a cuboid aligned with the coordinate axes. By default, the cube is centered at (0,0,0) and measures 2 units in each dimension, from -1 to +1. A cube's width is its extent along its object-space X axis, its height is its extent along the object-space Y axis, and its depth is its extent along its object-space Z axis.
Textures are applied individually to each face of the cube; the entire texture goes on each face. On the front, back, right, and left sides of the cube, the texture is applied right side up. On the top, the texture appears right side up when the top of the cube is tilted toward the user. On the bottom, the texture appears right side up when the top of the cube is tilted towards the -Z axis.
PROTO Cube [
    field SFFloat width  2
    field SFFloat height 2
    field SFFloat depth  2
] {
    ... equivalent to an IndexedFaceSet plus generator script...
}
This node represents a simple capped cylinder centered around the Y axis. By default, the cylinder is centered at (0,0,0) and has a size of -1 to +1 in all three dimensions. You can use the radius and height fields to create a cylinder with a different size.
The cylinder has three parts: the side, the top (Y = +1) and the bottom (Y = -1). Each part has an associated SFBool field that indicates whether the part is visible (TRUE) or invisible (FALSE).
When a texture is applied to a cylinder, it is applied differently to the sides, top, and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cylinder. The texture has a vertical seam at the back, intersecting the YZ plane. For the top and bottom, a circle is cut out of the texture square and applied to the top or bottom circle. The top texture appears right side up when the top of the cylinder is tilted toward the +Z axis, and the bottom texture appears right side up when the top of the cylinder is tilted toward the -Z axis.
PROTO Cylinder [
    field SFFloat radius 1
    field SFFloat height 2
    field SFBool  side   TRUE
    field SFBool  top    TRUE
    field SFBool  bottom TRUE
] {
    ... equivalent to an IndexedFaceSet plus generator script...
}
This node creates a rectangular grid of varying height, especially useful in modeling terrain. The model is primarily described by a scalar array of height values that specify the height of the surface above each point of the grid.
The verticesPerRow and verticesPerColumn fields indicate the number of grid points in the X and Z directions, respectively, defining a grid of (verticesPerRow-1) x (verticesPerColumn-1) rectangles. (Note that the number of columns of vertices is defined by verticesPerRow and the number of rows of vertices is defined by verticesPerColumn. Rows are numbered from 0 through verticesPerColumn-1; columns are numbered from 0 through verticesPerRow-1.)
The vertex locations for the rectangles are defined by the height field and the gridStep field: the height field contains at least verticesPerColumn*verticesPerRow floating point values specifying the height (Y coordinate) of each grid point, in row-major order (all of row 0 first), and the gridStep field specifies the X distance between columns (gridStep[0]) and the Z distance between rows (gridStep[1]).
Thus, the vertex corresponding to the ith row and jth column is placed at

    ( gridStep[0] * j, height[ i*verticesPerRow + j ], gridStep[1] * i )

in object space, where 0 <= i < verticesPerColumn and 0 <= j < verticesPerRow.
All points in a given row have the same Z value, with row 0 having the smallest Z value. All points in a given column have the same X value, with column 0 having the smallest X value.
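As a minimal sketch of this layout (values are illustrative), a 3-column by 2-row grid with 2-meter column spacing places the vertex in row 1, column 2 at (4, 5, 1) in object space:

ElevationGrid {
    verticesPerRow    3
    verticesPerColumn 2
    gridStep 2 1
    height [ 0 1 2,     # row 0
             3 4 5 ]    # row 1
}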
The default texture coordinates range from [0,0] at the first vertex to [1,1] at the last vertex (the diagonally opposite corner of the grid). The S texture coordinate is aligned with X, and the T texture coordinate with Z.
The colorPerVertex field determines whether colors (if specified in the color field) should be applied to each vertex or each quadrilateral of the ElevationGrid. If colorPerVertex is FALSE and the color field is not NULL, then the color field must contain a Color node containing at least (verticesPerColumn-1)*(verticesPerRow-1) colors. If colorPerVertex is TRUE and the color field is not NULL, then the color field must contain a Color node containing at least verticesPerColumn*verticesPerRow colors.
See the introductory Geometry section for a description of the ccw, solid, and creaseAngle fields.
By default, the rectangles are defined with a counterclockwise ordering, so the Y component of the normal is positive. Setting the ccw field to FALSE reverses the normal direction. Backface culling is enabled when the ccw field and the solid field are both TRUE (the default).
PROTO ElevationGrid [
    field        SFInt32 verticesPerColumn 0
    field        SFInt32 verticesPerRow    0
    field        SFVec2f gridStep          1 1
    field        MFFloat height            [ ]
    exposedField SFNode  color             NULL
    field        SFBool  colorPerVertex    TRUE
    exposedField SFNode  normal            NULL
    field        SFBool  normalPerVertex   TRUE
    exposedField SFNode  texCoord          NULL
    field        SFBool  ccw               TRUE
    field        SFBool  solid             TRUE
    field        SFFloat creaseAngle       0
] {
    ... equivalent to an IndexedFaceSet plus generator script...
}
The GeneralCylinder node is used to parametrically describe numerous families of shapes: extrusions (along an axis or an arbitrary path), surfaces of revolution, and bend/twist/taper objects.
A GeneralCylinder is defined by a 2D crossSection piecewise linear curve (described as a series of connected vertices), a 3D spine piecewise linear curve (also described as a series of connected vertices), a list of floating-point width parameters, and a list of floating-point twist parameters (in radians). Shapes are constructed as follows: The cross-section curve, which starts as a curve in the XZ plane, is scaled about the origin by the first width parameter, twisted counter-clockwise about the origin by the first twist parameter, and translated by the vector given as the first vertex of the spine curve. It is then extruded through space along the first segment of the spine curve. Next, it is scaled and twisted by the second width and twist parameters and extruded by the second segment of the spine, and so on.
A transformed cross section is computed at each joint (that is, at each vertex of the spine curve, where segments of the generalized cylinder connect), and the joints and segments are connected to form the surface. No check is made for self-penetration. Each transformed cross section is determined as follows:

1. The cross section is scaled about the origin by the corresponding width parameter.

2. It is twisted counterclockwise about the origin by the corresponding twist parameter (in radians).

3. It is rotated so that its local Y axis (initially perpendicular to the XZ plane in which the cross section is defined) points along an approximate tangent to the spine curve at that joint. The tangent is found as follows:
   - For all points other than the first or last: the tangent for spine[i] is found by normalizing the vector defined by (spine[i+1] - spine[i-1]).
   - If the spine curve is closed: the first and last points need to have the same tangent. This tangent is found as above, but using the points spine[0] for spine[i], spine[1] for spine[i+1], and spine[n-2] for spine[i-1], where spine[n-2] is the next-to-last point on the curve. (The last point in the curve, spine[n-1], is the same as the first, spine[0].)
   - If the spine curve is not closed: the tangent used for the first point is just the direction from spine[0] to spine[1], and the tangent used for the last point is the direction from spine[n-2] to spine[n-1].

4. In the simple case where the spine curve is flat in the XY plane, these rotations are all just rotations about the Z axis. In the more general case where the spine curve is any 3D curve, the destinations of all three of the local X, Y, and Z axes are needed to completely specify the rotation. The local Z axis is found by taking the cross product of (spine[i-1] - spine[i]) and (spine[i+1] - spine[i]). If the three points are collinear, this cross product is zero, so the value from the previous point is used instead. Once the Z axis (from the cross product) and the Y axis (from the approximate tangent) are known, the X axis is calculated as the cross product of the Y and Z axes.

5. Finally, the cross section is translated to the location of the spine point.
Surfaces of revolution: If the cross section is an approximation of a circle and the spine is straight, then the GeneralCylinder is equivalent to a surface of revolution, where the width parameters define the width of the cross section along the spine.
Cookie-cutter extrusions: If the widths are constant (for example, all 1) and the spine is straight, then the cross section acts like a cookie cutter, with the thickness of the cookie equal to the length of the spine.
Bend/twist/taper objects: These shapes are the result of using all fields. The spine curve bends the extruded shape defined by the cross section, the twist parameters twist it around the spine, and the width parameters taper it (by scaling about the spine).
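As a small sketch of the bend/twist/taper style (values are illustrative): a square cross section extruded along a straight spine, tapered to half size and twisted a quarter turn from bottom to top:

GeneralCylinder {
    spine        [ 0 0 0, 0 3 0 ]            # straight spine along +Y
    crossSection [ 1 1, -1 1, -1 -1, 1 -1 ]  # square cross section
    width        [ 1, 0.5 ]                  # taper to half size at the top
    twist        [ 0, 1.5708 ]               # ~90 degrees of twist, in radians
}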
GeneralCylinder has three parts: the sides, the beginCap (the surface at the initial end of the spine) and the endCap (the surface at the final end of the spine). Each part has an associated SFBool field that indicates whether the part exists (TRUE) or doesn't exist (FALSE).
When the beginCap or endCap fields are specified as TRUE, planar cap surfaces will be generated regardless of whether the crossSection is a closed curve. (If crossSection isn't a closed curve, the caps are generated as if it were -- equivalent to adding a final point to crossSection that's equal to the initial point. Note that an open surface can still have a cap, resulting (for a simple case) in a shape something like a soda can sliced in half vertically.) These surfaces are generated even if spine is also a closed curve. If a field value is FALSE, the corresponding cap is not generated.
GeneralCylinder automatically generates its own normals. Orientation of the normals is determined by the vertex ordering of the triangles generated by GeneralCylinder. The vertex ordering is in turn determined by the crossSection curve. If the crossSection is drawn counterclockwise, then the polygons will have counterclockwise ordering when viewed from the 'outside' of the shape (and vice versa for clockwise ordered crossSection curves).
Texture coordinates are automatically generated by general cylinders. Textures are mapped like the label on a soup can: the coordinates range in the U direction from 0 to 1 along the crossSection curve (with 0 corresponding to the first point in crossSection and 1 to the last) and in the V direction from 0 to 1 along the spine curve (again with 0 corresponding to the first listed spine point and 1 to the last). When crossSection is closed, the texture has a seam that follows the line traced by the crossSection's start/end point as it travels along the spine. If the endCap and/or beginCap exist, the crossSection curve is cut out of the texture square and applied to the endCap and/or beginCap planar surfaces. The beginCap and endCap textures' U and V directions correspond to the X and Z directions in which the crossSection coordinates are defined.
See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.
PROTO GeneralCylinder [
    field MFVec3f spine        [ 0 0 0, 0 1 0 ]
    field MFVec2f crossSection [ 1 1, -1 1, -1 -1, 1 -1 ]
    field MFFloat width        [ 1, 1 ]
    field MFFloat twist        [ 0, 0 ]
    field SFBool  sides        TRUE
    field SFBool  beginCap     TRUE
    field SFBool  endCap       TRUE
    field SFBool  ccw          TRUE
    field SFBool  solid        TRUE
    field SFBool  convex       TRUE
    field SFFloat creaseAngle  0
] {
    ... equivalent to an IndexedFaceSet plus generator script...
}
The Sphere node represents a sphere. By default, the sphere is centered at the origin and has a radius of 1.
Spheres generate their own normals. When a texture is applied to a sphere, the texture covers the entire surface, wrapping counterclockwise from the back of the sphere. The texture has a seam at the back on the YZ plane.
PROTO Sphere [
    field SFFloat radius 1
] {
    ... equivalent to an IndexedFaceSet plus generator script...
}
Interpolators are nodes that are useful for doing keyframed animation. Given a sufficiently powerful scripting language, all of these interpolators could be implemented using Script nodes (browsers might choose to implement these as pre-defined prototypes of appropriately defined Script nodes). We believe that keyframed animation will be common enough to justify the inclusion of these classes as built-in types.
Interpolator node names are defined based on the concept of what is to be interpolated: an orientation, coordinates, position, color, normals, or scalar. The fields for each interpolator provide the details on what the interpolators are affecting.
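As a sketch of how an interpolator is typically wired up (node names are illustrative): a TimeSensor drives a PositionInterpolator (described below), which in turn drives a Transform's translation. The TimeSensor's fraction eventOut and the routability of the Transform's translation are assumptions based on other sections of the proposal; details may differ:

DEF Clock TimeSensor { }     # assumed to send fraction events from 0 to 1
DEF Mover PositionInterpolator {
    keys   [ 0, 1 ]
    values [ 0 0 0, 5 0 0 ]  # slide 5 meters along X
}
DEF Box Transform { Shape { geometry Cube { } } }
ROUTE Clock.fraction TO Mover.set_fraction
ROUTE Mover.outValue TO Box.translation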
This node interpolates among a set of MFColor values, to produce MFColor outValue events. The number of colors in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many colors will be contained in the outValue events. For example, if 7 keyframe times and 21 colors are given, each keyframe consists of 3 colors; the first keyframe will be colors 0,1,2, the second colors 3,4,5, etc. The color values are linearly interpolated in each coordinate.
[[The description of MF values in and out belongs in the general interpolator section above, or maybe we should split up the interpolators into single-valued and multi-valued sections.]]
PROTO ColorInterpolator [
    exposedField MFFloat keys   []
    exposedField MFColor values []
    eventIn      SFFloat set_fraction
    eventOut     MFColor outValue
] {
    Script {
        exposedField MFFloat keys         IS keys
        exposedField MFColor values       IS values
        eventIn      SFFloat set_fraction IS set_fraction
        eventOut     MFColor outValue     IS outValue
        #
        # Does the math to map input fraction into values based on keys...
    }
}
This node linearly interpolates among a set of MFVec3f values. This would be appropriate for interpolating vertex positions for a geometric morph.
The number of coordinates in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many coordinates will be contained in the outValue events.
PROTO Coordinate3Interpolator [
    exposedField MFFloat keys   []
    exposedField MFVec3f values []
    eventIn      SFFloat set_fraction
    eventOut     MFVec3f outValue
] {
    Script {
        exposedField MFFloat keys         IS keys
        exposedField MFVec3f values       IS values
        eventIn      SFFloat set_fraction IS set_fraction
        eventOut     MFVec3f outValue     IS outValue
        #
        # Does the math to map input fraction into values based on keys...
    }
}
This node interpolates among a set of multi-valued Vec3f values, suitable for transforming normal vectors. All output vectors will have been normalized by the interpolator.
The number of normals in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many normals will be contained in the outValue events.
PROTO NormalInterpolator [
    exposedField MFFloat keys   []
    exposedField MFVec3f values []
    eventIn      SFFloat set_fraction
    eventOut     MFVec3f outValue
] {
    Script {
        exposedField MFFloat keys         IS keys
        exposedField MFVec3f values       IS values
        eventIn      SFFloat set_fraction IS set_fraction
        eventOut     MFVec3f outValue     IS outValue
        #
        # Does the math to map input fraction into values based on keys...
    }
}
This node interpolates among a set of SFRotation values. The rotations are absolute in object space and are, therefore, not cumulative. The values field must contain exactly as many rotations as there are keyframe times in the keys field, or an error will be generated and results will be undefined.
PROTO OrientationInterpolator [
    exposedField MFFloat    keys   []
    exposedField MFRotation values []
    eventIn      SFFloat    set_fraction
    eventOut     SFRotation outValue
] {
    Script {
        exposedField MFFloat    keys         IS keys
        exposedField MFRotation values       IS values
        eventIn      SFFloat    set_fraction IS set_fraction
        eventOut     SFRotation outValue     IS outValue
        #
        # Does the math to map input fraction into values based on keys...
    }
}
This node linearly interpolates among a set of SFVec3f values. This would be appropriate for interpolating a translation.
PROTO PositionInterpolator [
    exposedField MFFloat keys   []
    exposedField MFVec3f values []
    eventIn      SFFloat set_fraction
    eventOut     SFVec3f outValue
] {
    Script {
        exposedField MFFloat keys         IS keys
        exposedField MFVec3f values       IS values
        eventIn      SFFloat set_fraction IS set_fraction
        eventOut     SFVec3f outValue     IS outValue
        #
        # Does the math to map input fraction into values based on keys...
    }
}
This node linearly interpolates among a set of SFFloat values. This interpolator is appropriate for any parameter defined using a single floating point value, e.g., width, radius, intensity, etc. The values field must contain exactly as many numbers as there are keyframe times in the keys field, or an error will be generated and results will be undefined.
PROTO ScalarInterpolator [
    exposedField MFFloat keys   []
    exposedField MFFloat values []
    eventIn      SFFloat set_fraction
    eventOut     SFFloat outValue
] {
    Script {
        exposedField MFFloat keys         IS keys
        exposedField MFFloat values       IS values
        eventIn      SFFloat set_fraction IS set_fraction
        eventOut     SFFloat outValue     IS outValue
        #
        # Does the math to map input fraction into values based on keys...
    }
}
(complete alphabetical listing and description)
There are two general classes of fields; fields that contain a single value (where a value may be a single number, a vector, or even an image), and fields that contain multiple values. Single-valued fields all have names that begin with "SF", multiple-valued fields have names that begin with "MF". Each field type defines the format for the values it writes.
Multiple-valued fields are written as a series of values separated by commas, all enclosed in square brackets. If the field has zero values, only the square brackets ("[]") are written. The last value may optionally be followed by a comma. If the field has exactly one value, the brackets may be omitted and just the value written. For example, all of the following are valid for a multiple-valued field containing the single integer value 1:

1
[1,]
[ 1 ]
A field containing a single boolean (true or false) value. SFBools are written as TRUE or FALSE.
Fields containing one (SFColor) or zero or more (MFColor) RGB colors. Each color is written to file as an RGB triple of floating point numbers in ANSI C floating point format, in the range 0.0 to 1.0. For example:
[ 1.0 0. 0.0, 0 1 0, 0 0 1 ]
is an MFColor field containing the three colors red, green, and blue.
Fields that contain one (SFFloat) or zero or more (MFFloat) single-precision floating point numbers. SFFloats are written to file in ANSI C floating point format. For example:
[ 3.1415926, 12.5e-3, .0001 ]
is an MFFloat field containing three values.
A field that contains an uncompressed 2-dimensional color or greyscale image.
SFImages are written to file as three integers representing the width, height and number of components in the image, followed by width*height hexadecimal values representing the pixels in the image, separated by whitespace. A one-component image will have one-byte hexadecimal values representing the intensity of the image. For example, 0xFF is full intensity, 0x00 is no intensity. A two-component image puts the intensity in the first (high) byte and the transparency in the second (low) byte. Pixels in a three-component image have the red component in the first (high) byte, followed by the green and blue components (so 0xFF0000 is red). Four-component images put the transparency byte after red/green/blue (so 0x0000FF80 is semi-transparent blue). A value of 0xFF is completely transparent, 0x00 is completely opaque. Note: each pixel is actually read as a single unsigned number, so a 3-component pixel with value "0x0000FF" can also be written as "0xFF" or "255" (decimal). Pixels are specified from left to right, bottom to top. The first hexadecimal value is the lower left pixel of the image, and the last value is the upper right pixel.
For example,
1 2 1 0xFF 0x00
is a 1 pixel wide by 2 pixel high greyscale image, with the bottom pixel white and the top pixel black. And:
2 4 3 0xFF0000 0xFF00 0 0 0 0 0xFFFFFF 0xFFFF00
is a 2 pixel wide by 4 pixel high RGB image, with the bottom left pixel red, the bottom right pixel green, the two middle rows of pixels black, the top left pixel white, and the top right pixel yellow.
Fields containing one (SFInt32) or zero or more (MFInt32) 32-bit integers. SFInt32s are written to file as an integer in decimal or hexadecimal (beginning with '0x') format. For example:
[ 17, -0xE20, -518820 ]
is an MFInt32 field containing three values.
A field containing one (SFNode) or zero or more (MFNode) nodes. A node field's syntax is just the node(s) it contains; for example, this is valid syntax for an MFNode field:
[ Transform { translation 1 0 0 }, DEF CUBE Cube { }, USE SOME_NODE ]
An SFNode field may also contain the keyword NULL to indicate that it contains nothing.
A field containing an arbitrary rotation. SFRotations are written to file as four floating point values separated by whitespace. The 4 values represent an axis of rotation followed by the amount of right-handed rotation about that axis, in radians. For example, a 180 degree rotation about the Y axis is:
0 1 0 3.14159265
Fields containing one (SFString) or zero or more (MFString) UTF-8 strings (sequences of characters). Strings are written to file as a sequence of UTF-8 octets in double quotes. Any characters (including newlines and '#') may appear within the quotes. To include a double quote character within the string, precede it with a backslash. To include a backslash character within the string, type two backslashes. For example:
"One, Two, Three" "He said, \"Immel did it!\""
are both valid strings.
A field containing a single time value. Each time value is written to file as a double-precision floating point number in ANSI C floating point format. An absolute SFTime is the number of seconds since Jan 1, 1970 GMT.
A field containing a two-dimensional vector. SFVec2fs are written to file as a pair of floating point values separated by whitespace.
A field containing a three-dimensional vector. SFVec3fs are written to file as three floating point values separated by whitespace.
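For example (values are illustrative), 0.5 2 is a valid SFVec2f value and 1 0 0 is a valid SFVec3f value; the multiple-valued versions use the bracketed syntax described above, e.g., [ 0 0 0, 1 1 1 ] for a field containing two three-dimensional vectors.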
This section describes the syntax (grammar) of the Moving Worlds human-readable file format.
This grammar is ambiguous; semantic knowledge of the names and types of fields, eventIns, and eventOuts for each node type (either built-in, or user-defined using PROTO or EXTERNPROTO) must be used during parsing so that the parser knows which field type is being parsed.
Please see the Nodes Reference section of the Moving Worlds specification for a description of the allowed fields, eventIns and eventOuts for all pre-defined node types. Also note that some of the basic types that will typically be handled by a lexical analyzer (sffloatValue, sftimeValue, sfint32Value, and sfstringValue) have not been formally specified; please see the Fields Reference section of the spec for a more complete description of their syntax.
January 31, 1996
This appendix describes the Java classes and methods that allow scripts to interact with associated scenes. It contains links to various Java pages as well as to certain sections of the Moving Worlds spec (including the general description of scripting and the API).
Java(TM) is a portable, interpreted, object-oriented programming language developed at Sun Microsystems. It's likely to be the most common language supported by VRML browsers in Script nodes. A full description of Java is far beyond the scope of this appendix; see the Java web site for more information. This appendix describes only the Java bindings of the VRML API (the calls that allow the script in a VRML Script node to interact with the scene in the VRML file).
Java classes for VRML are defined in the package vrml. (Package names are generally all-lowercase, in deference to UNIX file system naming conventions.)
The Field class extends Java's Object class by default (when declared without an explicit superclass, as below); thus, Field has the full functionality of the Object class, including the getClass() method. The rest of the package defines a "Const" read-only class for each VRML field type, with a getValue() method for each class; and another read/write class for each VRML field type, with both getValue() and setValue() methods for each class.
Most of the setValue() methods are listed as "throws exception," meaning that errors are possible -- you may need to write exception handlers (using Java's try/catch mechanism) when you use those methods. Any method not listed as "throws exception" is guaranteed to generate no exceptions. Each method that throws an exception is followed by a comment indicating what type of exception will be thrown.
package vrml;

class Field { }

//
// Read-only (constant) classes, one for each field type:
//
class ConstSFBool extends Field { public boolean getValue(); }
class ConstSFColor extends Field { public float[] getValue(); }
class ConstMFColor extends Field { public float[][] getValue(); }
class ConstSFFloat extends Field { public float getValue(); }
class ConstMFFloat extends Field { public float[] getValue(); }
class ConstSFImage extends Field { public byte[] getValue(int[] dims); }
class ConstSFInt32 extends Field { public int getValue(); }
class ConstMFInt32 extends Field { public int[] getValue(); }
class ConstSFNode extends Field { public Node getValue(); }
class ConstMFNode extends Field { public Node[] getValue(); }
class ConstSFRotation extends Field { public float[] getValue(); }
class ConstMFRotation extends Field { public float[][] getValue(); }
class ConstSFString extends Field { public String getValue(); }
class ConstMFString extends Field { public String[] getValue(); }
class ConstSFVec2f extends Field { public float[] getValue(); }
class ConstMFVec2f extends Field { public float[][] getValue(); }
class ConstSFVec3f extends Field { public float[] getValue(); }
class ConstMFVec3f extends Field { public float[][] getValue(); }
class ConstSFTime extends Field { public double getValue(); }

//
// And now the writeable versions of the above classes:
//
class SFBool extends Field {
    public boolean getValue();
    public void setValue(boolean value);
}
class SFColor extends Field {
    public float[] getValue();
    public void setValue(float[] value)
        throws ArrayIndexOutOfBoundsException;
}
class MFColor extends Field {
    public float[][] getValue();
    public void setValue(float[][] value)
        throws ArrayIndexOutOfBoundsException;
    public void setValue(ConstMFColor value);
    public void set1Value(int index, float[] value);
}
class SFFloat extends Field {
    public float getValue();
    public void setValue(float value);
}
class MFFloat extends Field {
    public float[] getValue();
    public void setValue(float[] value);
    public void setValue(ConstMFFloat value);
    public void set1Value(int index, float value);
}
class SFImage extends Field {
    public byte[] getValue(int[] dims);
    public void setValue(byte[] data, int[] dims)
        throws ArrayIndexOutOfBoundsException;
}

// In Java, the int type is a 32-bit integer
class SFInt32 extends Field {
    public int getValue();
    public void setValue(int value);
}
class MFInt32 extends Field {
    public int[] getValue();
    public void setValue(int[] value);
    public void setValue(ConstMFInt32 value);
    public void set1Value(int index, int value);
}
class SFNode extends Field {
    public Node getValue();
    public void setValue(Node node);
}
class MFNode extends Field {
    public Node[] getValue();
    public void setValue(Node[] node);
    public void setValue(ConstMFNode node);
    public void set1Value(int index, Node node);
}
class SFRotation extends Field {
    public float[] getValue();
    public void setValue(float[] value)
        throws ArrayIndexOutOfBoundsException;
}
class MFRotation extends Field {
    public float[][] getValue();
    public void setValue(float[][] value)
        throws ArrayIndexOutOfBoundsException;
    public void setValue(ConstMFRotation value);
    public void set1Value(int index, float[] value);
}

// In Java, the String class is a Unicode string
class SFString extends Field {
    public String getValue();
    public void setValue(String value);
}
class MFString extends Field {
    public String[] getValue();
    public void setValue(String[] value);
    public void setValue(ConstMFString value);
    public void set1Value(int index, String value);
}
class SFTime extends Field {
    public double getValue();
    public void setValue(double value);
}
class SFVec2f extends Field {
    public float[] getValue();
    public void setValue(float[] value)
        throws ArrayIndexOutOfBoundsException;
}
class MFVec2f extends Field {
    public float[][] getValue();
    public void setValue(float[][] value)
        throws ArrayIndexOutOfBoundsException;
    public void setValue(ConstMFVec2f value);
    public void set1Value(int index, float[] value);
}
class SFVec3f extends Field {
    public float[] getValue();
    public void setValue(float[] value)
        throws ArrayIndexOutOfBoundsException;
}
class MFVec3f extends Field {
    public float[][] getValue();
    public void setValue(float[][] value)
        throws ArrayIndexOutOfBoundsException;
    public void setValue(ConstMFVec3f value);
    public void set1Value(int index, float[] value);
}

//
// Interfaces (abstract classes that your classes can inherit from
// but that you can't instantiate) relating to events and nodes:
//
interface EventIn {
    public String getName();
    public SFTime getTimeStamp();
    public ConstField getValue();
}

interface Node {
    public ConstField getValue(String fieldName)
        throws InvalidFieldException;
    public void postEventIn(String eventName, Field eventValue)
        throws InvalidEventInException;
}

//
// This is the general Script class, to be subclassed by all scripts.
// Note that the provided methods allow the script author to explicitly
// throw tailored exceptions in case something goes wrong in the
// script; thus, the exception codes for those exceptions are to be
// determined by the script author.
//
class Script implements Node {
    public void processEvents(EventIn [] events)
        throws Exception;                  // code is up to the script author
    public void eventsProcessed()
        throws Exception;                  // code is up to the script author
    protected Field getEventOut(String eventName)
        throws InvalidEventOutException;
    protected Field getField(String fieldName)
        throws InvalidFieldException;
}
This section lists the public Java interfaces to the Browser class, which allows scripts to get and set browser information. For descriptions of the methods, see the "Browser Interface" section of the "Scripting" section of the spec.
public class Browser {
    public static String getName();
    public static String getVersion();

    public static String getNavigationType();
    public static void setNavigationType(String type)
        throws InvalidNavigationTypeException;
    public static float getNavigationSpeed();
    public static void setNavigationSpeed(float speed);
    public static float getCurrentSpeed();
    public static float getNavigationScale();
    public static void setNavigationScale(float scale);

    public static boolean getHeadlight();
    public static void setHeadlight(boolean onOff);

    public static String getWorldURL();
    public static void loadWorld(String [] url);

    public static float getCurrentFrameRate();

    public static Node createVrmlFromURL(String[] url)
        throws InvalidVRMLException;
    public static Node createVrmlFromString(String vrmlSyntax)
        throws InvalidVRMLException;

    public void addRoute(Node fromNode, String fromEventOut,
                         Node toNode, String toEventIn)
        throws InvalidRouteException;
    public void deleteRoute(Node fromNode, String fromEventOut,
                            Node toNode, String toEventIn)
        throws InvalidRouteException;

    public void bindBackground(Node background);
    public void unbindBackground();
    public boolean isBackgroundBound(Node background);
    public void bindNavigationInfo(Node navigationInfo);
    public void unbindNavigationInfo();
    public boolean isNavigationInfoBound(Node navigationInfo);
    public void bindViewpoint(Node viewpoint);
    public void unbindViewpoint();
    public boolean isViewpointBound(Node viewpoint);
}
To perform system or networking calls, use the appropriate standard Java libraries.
Here's an example of a Script node which determines whether a given color contains a lot of red. The Script node exposes a color field, an eventIn, and an eventOut:
Script {
    field    SFColor currentColor 0 0 0
    eventIn  SFColor colorIn
    eventOut SFBool  isRed
    scriptType "javabc"
    behavior   "ExampleScript.java"
}
[[should we rename colorIn to setCurrentColor, or would that imply that one was required to use this naming convention?]]
And here's the source code for the "ExampleScript.java" file that gets called every time an eventIn is routed to the above Script node:
import vrml.*;

class ExampleScript extends Script {
    // Declare field(s)
    private SFColor currentColor = (SFColor) getField("currentColor");

    // Declare eventOut field(s)
    private SFBool isRed = (SFBool) getEventOut("isRed");

    public void colorIn(ConstSFColor newColor, ConstSFTime ts) {
        // This method is called when a colorIn event is received
        currentColor.setValue(newColor.getValue());
    }

    public void eventsProcessed() {
        if (currentColor.getValue()[0] >= 0.5)   // if red is at or above 50%
            isRed.setValue(true);
    }
}
For details on when the methods defined in ExampleScript are called, see the "Execution Model" section of the "Concepts" document.
January 30, 1996
This appendix describes the C datatypes and functions that allow scripts to interact with associated scenes.
VRML browsers aren't required to support C in Script nodes. In fact, supporting C is problematic: compiled C code is not portable across platforms, and C code runs with no protection, so nothing would prevent a hostile script from executing a call such as system("rm -r /*").
Therefore, the C bindings given in this document for interaction between VRML Script nodes and the rest of a VRML scene are provided for reference purposes only.
/*
 * vrml.h - vrml support procedures for C
 */

typedef void * Field;
typedef char * String;
typedef int    boolean;
typedef void * Node;    /* declared here because the Const typedefs below use it */

typedef struct {
    unsigned char *value;
    int dims[3];
} SFImageType;

/*
 * Read-only (constant) type definitions, one for each field type:
 */
typedef const boolean     *ConstSFBool;
typedef const float       *ConstSFColor;
typedef const float       *ConstMFColor;
typedef const float       *ConstSFFloat;
typedef const float       *ConstMFFloat;
typedef const SFImageType *ConstSFImage;
typedef const int         *ConstSFInt32;
typedef const int         *ConstMFInt32;
typedef const Node        *ConstSFNode;
typedef const Node        *ConstMFNode;
typedef const float       *ConstSFRotation;
typedef const float       *ConstMFRotation;
typedef const String       ConstSFString;
typedef const String      *ConstMFString;
typedef const float       *ConstSFVec2f;
typedef const float       *ConstMFVec2f;
typedef const float       *ConstSFVec3f;
typedef const float       *ConstMFVec3f;
typedef const double      *ConstSFTime;

/*
 * And now the writeable versions of the above types:
 */
typedef boolean     *SFBool;
typedef float       *SFColor;
typedef float       *MFColor;
typedef float       *SFFloat;
typedef float       *MFFloat;
typedef SFImageType *SFImage;
typedef int         *SFInt32;
typedef int         *MFInt32;
typedef Node        *SFNode;
typedef Node        *MFNode;
typedef float       *SFRotation;
typedef float       *MFRotation;
typedef String       SFString;
typedef String      *MFString;
typedef float       *SFVec2f;
typedef float       *MFVec2f;
typedef float       *SFVec3f;
typedef float       *MFVec3f;
typedef double      *SFTime;

/*
 * Event-related types and functions
 */
typedef void *EventIn;

String getEventInName(EventIn eventIn);
int    getEventInIndex(EventIn eventIn);
SFTime getEventInTimeStamp(EventIn eventIn);
void  *getEventInValue(EventIn eventIn);

void *getNodeValue(Node *node, String fieldName);
void  postNodeEventIn(Node *node, String eventName, Field eventValue);

/*
 * C script
 */
typedef void *Script;

Field getScriptEventOut(Script script, String eventName);
Field getScriptField(Script script, String fieldName);

void exception(String error);
This section lists the functions that allow scripts to get and set browser information. For descriptions of the functions, see the "Browser Interface" section of the "Scripting" section of the spec. Since these functions aren't defined as part of a "Browser" class in C, most of their names include the word "Browser" for clarity.
String  getBrowserName();
float   getBrowserVersion();

String  getBrowserNavigationType();
void    setBrowserNavigationType(String type);
float   getBrowserNavigationSpeed();
void    setBrowserNavigationSpeed(float speed);
float   getBrowserCurrentSpeed();
float   getBrowserNavigationScale();
void    setBrowserNavigationScale(float scale);

boolean getBrowserHeadlight();
void    setBrowserHeadlight(boolean onOff);

String  getBrowserWorldURL();
void    loadBrowserWorld(String url);

float   getBrowserCurrentFrameRate();

Node    createVrmlFromURL(String url);
Node    createVrmlFromString(String vrmlSyntax);

void    addRoute(Node fromNode, String fromEventOut,
                 Node toNode, String toEventIn);
void    deleteRoute(Node fromNode, String fromEventOut,
                    Node toNode, String toEventIn);

void    bindBrowserBackground(Node background);
void    unbindBrowserBackground();
boolean isBrowserBackgroundBound(Node background);
void    bindBrowserNavigationInfo(Node navigationInfo);
void    unbindBrowserNavigationInfo();
boolean isBrowserNavigationInfoBound(Node navigationInfo);
void    bindBrowserViewpoint(Node viewpoint);
void    unbindBrowserViewpoint();
boolean isBrowserViewpointBound(Node viewpoint);
[[anything special here, or do we just use standard C system and networking libraries?]]
[[need to put in the actual Script node here... And I think the program needs to be completely rewritten to use new entrypoint model, with function named for each eventIn plus an eventsProcessed function. Is FooScriptType even necessary under new model?]]
/*
 * FooScript.c
 */
#include "vrml.h"

typedef struct {
    Script  parent;
    SFInt32 fooField;
    SFFloat barOutEvent;
} FooScriptType;

typedef FooScriptType *FooScript;

void constructFooScript(FooScript foo, Script p)
{
    foo->parent = p;

    /* Initialize field(s) */
    foo->fooField = (SFInt32) getScriptField(foo->parent, "foo");

    /* Initialize eventOut field(s) */
    foo->barOutEvent = (SFFloat) getScriptEventOut(foo->parent, "bar");
}

void processFooScriptEvents(FooScript foo, EventIn *list, int length)
{
    int i;
    for (i = 0; i < length; i++) {
        EventIn event = list[i];
        switch (getEventInIndex(event)) {
          case 0:
          case 1:
            /* convert the integer field value to the float eventOut */
            *foo->barOutEvent = (float) *foo->fooField;
            break;
          default:
            exception("Unknown eventIn");
        }
    }
}
Last modified: January 15, 1996. This document can be found at http://webspace.sgi.com/moving-worlds/Design.html
This document describes the "why" of the Moving Worlds VRML 2.0 proposal-- why design decisions were made, why things were changed from VRML 1.0. It is written for armchair VRML designers and for the people who will be implementing VRML 2.0.
It contains the following sections:
There has been a lot of feedback from people implementing VRML 1.0 that the very general scene structure and property inheritance model of VRML 1.0 makes its implementation unnecessarily complex. Many rendering libraries (such as RealityLab, RenderMorphics, IRIS Performer) have a simpler notion of rendering state than VRML 1.0. The mismatch between these rendering libraries and VRML causes performance problems and implementation complexity, and these problems become much worse in VRML 2.0 as we add the ability to change the world over time.
To ensure that VRML 2.0 implementations are low-memory and high performance, the Moving Worlds VRML 2.0 proposal makes two major changes to the basic structure of the node hierarchy:
To make this change, two new nodes are introduced (the Shape and Appearance nodes), several are removed (Translate, Rotate, Scale, Separator, and MatrixTransform), and a few nodes are changed (Transform, IndexedFaceSet); this change has the added benefit of making VRML simpler.
The decisions on how to partition functionality into separate objects were motivated mainly by considerations of what should or should not be individually sharable. Sharing (DEF/USE in VRML 1.0, also known as 'cloning' or 'multiple instancing') is very important, since it allows many VRML scenes to be much smaller on disk (which means much shorter download times) and much smaller in memory.
One extreme would be to allow absolutely ANYTHING in the VRML file to be shared, even individual numbers of a multiple-valued field. Allowing sharing on that fine a level becomes an implementation problem if the values are allowed to change-- and the whole point of behaviors is to allow values in the scene to change. Essentially, some kind of structure must be kept for anything that can be shared that may also later be changed.
We considered allowing any field to be shared, but we believe that even that is too burdensome to implementations, since there may not be a one-to-one mapping between fields in the VRML file and the implementation's in-memory data structures.
VRML 1.0 allows nodes to be shared (via DEF/USE), and allowing sharing of any node seems reasonable -- especially since events (the mechanism for changing the scene graph) are routed to nodes, and since as much compatibility with VRML 1.0 as possible is one of the goals of the Moving Worlds proposal.
A new node type is introduced-- the Shape node. It exists only to contain geometry and appearance information, so that geometry+appearance may be easily shared. It contains only two fields; the geometry field must contain a geometry node (IndexedFaceSet, Cube, etc) and the appearance field may contain one or more appearance properties (Material, Texture2, etc):
Shape {
    field SFNode appearance
    field SFNode geometry
}
The three-way decomposition of shapes (Shape/Geometry/Appearance) was chosen to allow sharing of entire shapes, just a shape's geometry, or just the properties. For example, the pieces of a wooden chair and a marble table could be re-used to create a wooden table (shares the texture of the wooden chair and the geometry of the marble table) and/or to create multiple wooden chairs.
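A minimal sketch of that kind of re-use with DEF/USE (node names are illustrative and node contents are elided):

DEF WoodChair Shape {
    appearance DEF Wood Appearance { ... shiny wood properties... }
    geometry   IndexedFaceSet { ... chair geometry... }
}
DEF MarbleTable Shape {
    appearance Appearance { ... marble properties... }
    geometry   DEF TableGeom IndexedFaceSet { ... table geometry... }
}
Shape {                      # a wooden table, built entirely from shared parts
    appearance USE Wood
    geometry   USE TableGeom
}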
It is an error to specify the same property more than once in the appearance array; the results of doing so are undefined.
The existing VRML 1.0 geometry types are modified as necessary to include the geometric information needed to specify them. For example, a vertexData field is added to the IndexedFaceSet node to contain Coordinate3, TextureCoordinate2 and Normal nodes that define the positions, texture coordinates and normals of the IndexedFaceSet's geometry. In addition, the fields of the ShapeHints node are added to IndexedFaceSet.
These changes make it much easier to implement authoring tools that read and edit VRML files, since a Shape has a very well-defined structure with all of the information necessary to edit the shape contained inside of it. They also make VRML "cleaner"-- for example, in VRML 1.0 the only shape that pays attention to the ShapeHints node is the IndexedFaceSet. Therefore, it makes a lot of sense to put the ShapeHints information INSIDE the IndexedFaceSet.
Shapes and other "Leaf" classes (such as Cameras, Lights, Environment nodes, Info nodes, etc) are collected into a scene hierarchy with group nodes such as Transform and LOD. Group nodes may contain only other group nodes or leaves as children; adding an appearance property or geometry directly to a group node is an error.
VRML 1.0 had a complicated model of transformations; transformations were allowed as children of group nodes and were accumulated across the children. This causes many implementation problems even in VRML 1.0 with LOD nodes that have transformations as children; the addition of behaviors would only make those problems worse.
Allowing at most one coordinate transformation per group node results in much faster and simpler implementations. Deciding which group nodes should have the transformation information built-in is fairly arbitrary; obvious choices would be either "all" or "one". Because we believe that transformations for some of the group nodes (such as LOD) will rarely be useful and maintaining fields with default values for all groups will be an implementation burden, we have chosen "one" and have added the fields of the old VRML 1.0 Transform nodes to the Transform node:
Transform {
    field SFVec3f    translation        0 0 0
    field SFRotation rotation           0 0 1 0
    field SFVec3f    scaleFactor        1 1 1
    field SFRotation scaleOrientation   0 0 1 0
    field SFVec3f    center             0 0 0
    field SFVec2f    textureTranslation 0 0
    field SFFloat    textureRotation    0
    field SFVec2f    textureScaleFactor 1 1
    field SFVec2f    textureCenter      0 0
}
These allow arbitrary translation, rotation and scaling of either coordinates or texture coordinates.
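For example (a sketch; values are illustrative), a cube raised 2 meters, turned a quarter turn about Y, and doubled in size:

Transform {
    translation 0 2 0
    rotation    0 1 0 1.5708
    scaleFactor 2 2 2
    Shape { geometry Cube { } }
}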
Side note: we are proposing that the functionality of the MatrixTransform node NOT be supported, since most implementations cannot correctly handle arbitrary 4x4 transformation matrices. We are willing to provide code that decomposes 4x4 matrices into the above form, which will take care of most current uses of MatrixTransform. The minority of the VRML community that truly need arbitrary 4x4 matrices can define a MatrixTransform extension with the appropriate field.
The nodes that can appear in a world are grouped into the following categories:
Bundling properties into an Appearance node simplifies sharing, decreases file and run-time bloat and mimics modelling paradigms where one creates a palette of appearances ("materials") and then instances them when building geometry. Without Appearances, there is no easy way of creating and identifying a "shiny wood surface" that can be shared by the kitchen chair, the hardwood floor in the den, and the Fender Strat hanging on the wall.
Another major concern of VRML in general and the Appearance node in particular is expected performance of run-time implementations of VRML. It is important for run-time data structures to closely correspond to VRML; otherwise browsers are likely to maintain 2 distinct scene graphs, wasting memory as well as time and effort in keeping the 2 graphs synchronized.
The Appearance node offers 2 distinct advantages for implementations:
There are several different ways of thinking about prototypes:
A prototype's interface is declared using one of the following syntaxes:
PROTO name [
    field    fieldType name defaultValue
    eventIn  fieldType name
    eventOut fieldType name
] { implementation }

EXTERNPROTO name [
    field    fieldType name
    eventIn  fieldType name
    eventOut fieldType name
] URL(s)
(there may be any number of field/eventIn/eventOut declarations in any order).
A prototype just declares a new kind of node; it does not create a new instance of a node and insert it into the scene graph. That must be done by creating an instance of the prototype.
First, why do we need to declare a prototype's interface at all? We could just say that any fields, eventIns or eventOuts of the nodes inside the prototype's implementation that are exposed using the IS construct (see below) are the prototype's interface. As long as the browser knows the prototype's interface, it can parse any prototype instances that follow it.
The declarations are necessary for EXTERNPROTO because a browser may not be able to get at the prototype's implementation. Also requiring them for PROTO makes the VRML file both more readable (it is much easier to see the PROTO declaration rather than looking through reams of VRML code for nodes with IS) and makes the syntax more consistent.
Default values must be given for a prototype's fields so that they always have well-defined values (it is possible to instantiate a prototype without giving values for all of its fields, just like any other VRML node). Default values must not be specified for an EXTERNPROTO, because the default values for the fields will be defined inside the URL that the EXTERNPROTO refers to.
EXTERNPROTO refers to one or more URLs, with the first URL being the preferred implementation of the prototype and any other URLs defining less-desirable implementations. Browsers will have to be able to deal with the possibility that an EXTERNPROTO's implementation cannot be found because none of the URLs are available (or the URL array is empty!); browsers may also decide to "delay-load" a prototype's implementation until it is actually needed (like they do for the VRML 1.0 WWWInline node).
Browsers can properly deal with EXTERNPROTO instances without implementations. Events will never be generated from such instances, of course, so that isn't a problem. The browser can decide to either throw away any events that are routed to such an instance or to queue them up until the implementation does become available. If it decides to queue them up, the results when they're finally processed by the prototype's implementation could be indeterminate IF the prototype generates output events in response to the input events. A really really smart browser could deal with this case by performing event rollback and roll-forward, re-creating the state of the world (actually, only the part of the world that can possibly be influenced by the events generated from the prototype need to be rolled forward/back) when the events were queued and "re-playing" input events from there.
The fields of a prototype are internal to it, and a browser needs to know their current and default values only to properly create a prototype instance. Therefore, if the browser cannot create prototype instances (because the prototype implementation is not available) the default values of fields aren't needed. So, EXTERNPROTO provides all the information a browser needs.
The prototype's implementation is surrounded by curly braces to separate it from the rest of the world. A prototype's implementation creates a new name scope -- any names defined inside a prototype implementation are available only inside that prototype implementation. In this way a prototype's implementation can be thought of as if it is a completely separate file. Which, of course, is exactly what EXTERNPROTO does.
There's an interesting issue concerning whether or not things defined outside the prototype's implementation can be USEd inside of it. We think that defining prototypes such that they are completely self-contained (except for the information passed in via eventIn or field declarations) is wisest.
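For example (a sketch; the names are purely illustrative):

PROTO Wheel [ ]
{
  DEF Rim Cylinder { }   # "Rim" is visible only inside this PROTO
}
Wheel { }
# USE Rim here would be an error; the prototype's names are out of scope.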
The node type of a prototype is the type of the first node of its implementation. So, for example, if a prototype's implementation is:

{ IndexedFaceSet { ... } }

then the prototype can only be used in the scene wherever an IndexedFaceSet can be used (which is in the geometry field of a Shape node). The extra curly braces allow Scripts, TimeSensors and ROUTEs to be part of the prototype's implementation, even though they're "off to the side" of the prototype's scene graph.
The IS syntax for specifying what is exposed inside a prototype's implementation was suggested by Conal Elliott of Microsoft. It was chosen because:
Once a PROTO or EXTERNPROTO has been declared, a prototype can be instantiated and treated just like any built-in node. In fact, built-in nodes can be treated as if there were a set of pre-defined PROTO definitions available at start-up in all VRML browsers.
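For instance, one can imagine the standard Sphere node being declared as if by the following (a conceptual sketch only; the real implementation is internal to the browser):

PROTO Sphere [ field SFFloat radius 1 ]
{
  ... browser-internal implementation ...
}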
Each prototype instance is independent from all others-- changes to one instance do not affect any other instance. Conceptually, each prototype instance is equivalent to a completely new copy of the prototype implementation.
However, even though prototype instances are conceptually completely separate, they can be implemented so that information is automatically shared between prototype instances. For example, consider this PROTO:
PROTO Foo [ eventIn SFVec3f changeTranslation ]
{
  Transform {
    translation IS changeTranslation
    Shape { ... geometry+properties stuff... }
  }
}
Because the translation of the Transform is the only thing that can possibly be changed, either from a ROUTE or from a Script node, only the Transform needs to be copied. The same Shape node may be shared by all prototype instances.
Script nodes that contain SFNode/MFNode fields (or may receive SFNode/MFNode events) can be treated in a similar way; for example:
PROTO Foo [ eventIn SFFloat doSomething ]
{
  DEF Root Transform { ... stuff ... }
  DEF MyScript Script {
    eventIn SFFloat doIt IS doSomething
    field SFNode whatToAffect USE Root
    ... other script stuff...
  }
}
In this case, a brand-new copy of everything inside Foo will have to be created for every prototype instance, because MyScript may modify the Root Transform or any of its children using the script API. Of course, if some of the Transform's children are themselves prototype instances, the browser might still be able to optimize them.
Issue: If we can get users to use something like this prototype definition, browsers might have more opportunities for optimization:
# A Transform that cannot be changed:
PROTO ConstantTransform [ field MFNode children [ ]
                          field SFVec3f translation 0 0 0
                          ... etc for other fields... ]
{
  Transform {
    children IS children
    translation IS translation
    ... etc ...
  }
}
We can imagine variations on the above-- Transforms with transformations that can be changed, but children that can't, transformations that can't but children that can, etc.
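For instance, a variation whose transformation can be changed but whose children cannot might look like this (a sketch; the prototype name is illustrative, and the event name follows the setTranslation naming convention used elsewhere in this proposal):

PROTO MovableConstChildren [ field MFNode children [ ]
                             field SFVec3f translation 0 0 0
                             eventIn SFVec3f setTranslation ]
{
  Transform {
    children IS children
    translation IS translation
    setTranslation IS setTranslation
  }
}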
By extending the syntax of a URL in an EXTERNPROTO, all of the current and proposed extensibility mechanisms for VRML can be handled (credit for these ideas goes to Mitra).
The idea is to use the URL syntax to refer to an internal or built-in implementation of a node. For example, imagine your system has a Torus geometry node built-in. The idea is to use EXTERNPROTO to declare that fact, like this:
EXTERNPROTO Torus [ field SFFloat bigRadius
                    field SFFloat smallRadius ]
"internal:Torus"
URLs of the form "internal:name" tell the browser to look for a "native" implementation (perhaps searching for the implementation on disk, etc).
Just as in any other EXTERNPROTO, if the implementation cannot be found the browser can safely parse and ignore any prototype instances.
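For example, given only the Torus declaration above, a browser can correctly parse (and, if need be, ignore) an instance such as:

Torus { bigRadius 2 smallRadius 0.5 }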
The 'alternateRep' notion is handled by specifying multiple URLs for the EXTERNPROTO:
EXTERNPROTO Torus [ field SFFloat bigRadius
                    field SFFloat smallRadius ]
[ "internal:Torus",
  "http://machine/directory/protofile" ]
So, if a "native" implementation of the Torus can't be found, an implementation is downloaded from the given machine/directory/protofile-- the implementation would probably be an IndexedFaceSet node with a Script attached that computes the geometry of the torus based on bigRadius and smallRadius.
The 'isA' notion of VRML 1.0 is also handled using this mechanism. The ExtendedMaterial example from the VRML 1.0 spec:
ExtendedMaterial {
  fields [ MFString isA, MFFloat indexOfRefraction,
           MFColor ambientColor, MFColor diffuseColor,
           MFColor specularColor, MFColor emissiveColor,
           MFFloat shininess, MFFloat transparency ]
  isA [ "Material" ]
  indexOfRefraction .34
  diffuseColor .8 .54 1
}
becomes:
PROTO ExtendedMaterial [ field MFFloat indexOfRefraction 0
                         field MFColor ambientColor [ 0 0 0 ]
                         field MFColor diffuseColor [ .8 .8 .8 ]
                         ... etc, rest of fields... ]
{
  Material {
    ambientColor IS ambientColor
    diffuseColor IS diffuseColor
    ... etc ...
  }
}

ExtendedMaterial {
  indexOfRefraction .34
  diffuseColor .8 .54 1
}
This nicely cleans up the rules about whether the fields of a new node must be defined only the first time the node appears in a file or every time it appears (the PROTO or EXTERNPROTO must appear once, before the first node instance). And it makes VRML simpler.
Several different architectures for applying changes to the scene graph were considered before settling on the ROUTE syntax. This section documents the arguments for and against the alternative architectures.
One alternative is to try to keep all behaviors out of VRML, and do everything inside the scripting API.
In this model, a VRML file looks very much like a VRML 1.0 file, containing only static geometry. In this case, instead of loading a .wrl VRML file into your browser, you would load some kind of .script file that then referenced a .wrl file and then proceeded to modify the objects in the .wrl file over time. This is similar to conventional programming; the program (script) loads the data file (VRML .wrl file) and then proceeds to make changes to it over time.
One advantage of this approach is that it makes the VRML file format simpler. A disadvantage is that the scripting language may need to be more complex.
The biggest disadvantage, however, is that it is difficult to achieve good optimizability, scalability and composability-- three of our most important goals.
In VRML 1.0, scalability and composability are accomplished using the WWWInline node. In an all-API architecture, some mechanism similar to WWWInline would have to be introduced into the scripting language to allow similar scalability and composability. That is certainly possible, but putting this functionality into the scripting language severely affects the kinds of optimizations that browsers are able to perform today.
For example, the browser can pay attention to the direction that a user is heading and pre-load parts of the world that are in that direction if the browser knows where the WWWInline nodes are. If the WWWInline concept is moved to the scripting language the browser probably will NOT know where they are.
Similarly, a browser can perform automatic behavior culling if it knows which parts of the scene may be affected by a script. For example, imagine a lava lamp sitting on a desk. There is no reason to simulate the motion of the blobs in the lamp if nobody is looking at it-- the lava lamp has a completely self-contained behavior. In an API-only architecture, it would be impossible for the browser to determine that the behavior was self-contained; however, with routes, the browser can easily determine that there are no routes into or out of the lava lamp, and that it can therefore be safely behavior culled. (side note: we do propose flags on Scripts for cases in which it is important that they NOT be automatically culled).
Another disadvantage to this approach is that it allows only re-use of geometry. Because the behaviors must directly load the geometry, it is impossible to "clone" a behavior and apply it to two different pieces of geometry, or to compose together behavior+geometry that can then be re-used several times in the same scene.
The disconnect between the VRML file and the script file will make revision control painful. When the VRML file is changed, the script may or may not have to be changed-- in general, it will be very difficult for a VRML authoring system to maintain worlds with behaviors. If the VRML authoring system cannot parse the scripting language to find out what it refers to in the VRML file, then it will be impossible for the authoring system to ensure that behaviors continue to work as the VRML file is edited.
Another alternative is to extend VRML so that it becomes a complete programming language, allowing any behavior to be expressed in VRML.
The main disadvantage to this approach is that it requires inventing Yet Another Scripting Language, and makes implementation of a VRML browser much more complicated. If the language chosen is very different from popular languages, there will be very few people capable of programming it and very little infrastructure (classes, books, etc) to help make it successful.
Writing a VRML authoring system more sophisticated than a simple text editor becomes very difficult if a VRML file may contain the equivalent of an arbitrary program. Creating ANY VRML content becomes equivalent to programming, which will limit the number of people able to create interesting VRML worlds.
The main advantage to an all-VRML architecture is the opportunity for automatic optimizations done by the browser, since the browser knows everything about the world.
The alternative we chose was to treat behaviors as "black boxes" (Script nodes) with well-defined interfaces (routes and fields).
Treating behaviors as black boxes allows any scripting language to be used (Java, VisualBasic, ML, whatever) without changing the fundamental architecture of VRML. Implementing a browser becomes much easier because only the interface between the scene and the scripting language needs to be implemented, not the entire scripting language.
Expressing the interface to behaviors in the VRML file allows an authoring system to deal intelligently with the behaviors, and allows most world creation tasks to be done with a graphical interface. A programming editor need only appear when a sophisticated user decides to create or modify a behavior (opening up the black box, essentially). The authoring system can safely manipulate the scene hierarchy (add geometry, delete geometry, rename objects, etc) without inadvertently breaking connections to behaviors.
The existing VRML composability and scalability features are retained, and because the possible effects of a behavior on the world are known to the browser, most of the optimizations that can be done in an all-VRML architecture can still be done.
This section gives a "thumbnail" design for how a browser might decide to implement routes. It points out some properties of the routes design that are not obvious at first glance and that can make an implementation of routes simple and efficient.
There doesn't need to be any data copying at all as an event "travels" along a route. In fact, the event doesn't need to "travel" at all-- a ROUTE is really just a renaming that lets the eventIn stand for the eventOut, which is what allows the composability, authorability, extensibility and scalability that are major goals of the Moving Worlds design.
The data for an event can be stored at the source of the event-- with the "eventOut". The "eventIn" doesn't need to store any data, because it is impossible to change an "eventIn"-- it can just point to the data stored at the "eventOut". That means that moving an event along a ROUTE can be as cheap as writing a pointer. In fact, in the VERY common case in which there is no "fan-in" (there aren't multiple eventOuts routed into a single eventIn), NO data copying at all needs to take place-- the eventIn can just point to the eventOut, since that eventOut will always be the source of its events.
Exposed fields-- fields that have corresponding eventOut's-- can share their value between the eventOut and the field itself, so very little extra overhead is imposed on "exposed" fields. Highly optimized implementations of nodes with exposed fields could store the data structures needed to support routes separately from the nodes themselves and use a dictionary mapping node pointers to routing structures, adding NO memory overhead for nodes that do not have routes coming into or out of them (which is the common case).
Because the routing structures are known to the browser, many behavior-culling optimizations are possible. A two-pass notification+evaluation implementation will automatically cull out any irrelevant behaviors without any effort on the part of the world creator. The algorithm works by delaying the execution of behaviors until their results are necessary, as follows:
Imagine a TimeSensor that sends alpha events to a Script that in turn sends setDiffuseColor events to an object, to change the object's color over time. Allocate one bit along each of these routes: a "dirty bit" that records whether or not changes are happening along that route. The algorithm then works in two passes: when the TimeSensor generates an event, the dirty bits along all routes downstream of it are set, without executing the Script (the "push notification" pass); later, when the browser actually needs the object's color because the object is about to be drawn, it notices the dirty bit and pulls a current value through the routes, executing the Script only then (the "pull events" pass). If the object is never drawn, the Script is never executed.
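In VRML, the scenario just described might be wired up like this (a sketch; the Script, its behavior file, and the fraction eventOut name are illustrative, following the naming used in the examples later in this proposal):

DEF Clock TimeSensor { cycleInterval 5 }
DEF Colorizer Script {
  eventIn SFFloat fraction
  eventOut SFColor color
  behavior "colorizer.java"   # hypothetical: maps a time fraction to a color
}
DEF M Material { }
ROUTE Clock.fraction TO Colorizer.fraction
ROUTE Colorizer.color TO M.setDiffuseColor

A dirty bit is allocated for each of the two ROUTEs; neither the Script nor the Material is touched until something downstream of them actually needs to be drawn.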
This two-pass "push notification / pull events" algorithm has several nice properties: behaviors whose results are never needed are never executed, no work is done for parts of the scene that are not changing, and the world creator gets this culling for free.
Moving Worlds has been carefully designed so that a browser need only keep the parts of the VRML scene graph that might be changed. There is a tradeoff between world creators, who want to have control over the VRML scene graph structure, and browser implementors, who also want complete control over it; Moving Worlds compromises between the two, allowing world creators to impose a particular structure on selected parts of the world while allowing browsers to optimize away the rest.
One example of this is the routing mechanism. Consider the following route:
Shape {
  appearance Appearance {
    material DEF M Material { ... }
  }
  geometry Cube { }
}
ROUTE MyAnimation.color -> M.setDiffuseColor
A browser implementor might decide not to maintain the Material as a separate object, but instead to route all setDiffuseColor events directly to the relevant shape(s). If the Material was used in several shapes then several routes might need to be established where there was one before, but as long as the visual results are the same the browser implementor is free to do that.
There is a potential problem if some Script node has a pointer to or can get a pointer to the Material node. In that case, there _will_ need to be at least a stand-in object for the Material (that forwards events on to the appropriate shapes) IF the Script might directly send events to what it thinks is the Material node. However, Script nodes that do this MUST set the "directOutputs" flag to let the browser know that it might do this. And the browser will know if any Script with that flag set can get access to the Material node, because the only way Scripts can get access to Nodes is via a field, an eventIn, or by looking at the fields of a node to which it already has access.
World creators can help browsers by limiting what Script nodes have access to. For example, a browser will have to maintain just about the entire scene structure of this scene graph:
DEF ROOT Transform {
  children [
    Shape { ... geometry Sphere { } },
    Transform { ... stuff ... }
  ]
}
Script {
  directOutputs TRUE
  field SFNode whatToChange USE ROOT
  ...
}
Because the Script has access to the root of the scene, it can get the children of that root node, send them events directly, add children, remove children, etc.
However, this entire scene can be optimized below the Transform, because the browser KNOWS it cannot change:
PROTO ConstTransform [ field MFNode children [ ] ]
{
  Transform { children IS children }
}
DEF ROOT ConstTransform {
  children [
    Shape { ... geometry Sphere { } },
    Transform { ... stuff ... }
  ]
}
Script {
  directOutputs TRUE
  field SFNode whatToChange USE ROOT
  ...
}
Because of the prototype interface, the browser KNOWS that the Script cannot affect anything inside the ConstTransform-- the ConstTransform has NO exposed fields or eventIns. If the ConstTransform doesn't contain any sources of changes (Sensors or Scripts), then the entire subgraph can be optimized away-- perhaps stored ONLY as a display list for a rendering library, or perhaps collapsed into a "big bag of triangles" (also assuming that there are no LODs, of course).
The other nice thing about all this is that a PROTO or EXTERNPROTO (or WWWInline, which is pretty much equivalent to a completely opaque prototype) can be optimized independently of everything else; the less control an author gives over how something might be changed, the more opportunities there are for optimization.
The children of a Transform (or other group node) are kind of strange-- they aren't specified like fields in the VRML 1.0 syntax.
Issue: They could be-- they are functionally equivalent to an MFNode field. For example, this:
# Old syntax?
Transform {
  Transform { ... }
  Transform { ... }
}
is equivalent to the slightly wordier:
# New syntax?
Transform {
  children [
    Transform { ... },
    Transform { ... }
  ]
}
... where "children" is an MFNode field. The issue is whether or not we should keep the VRML 1.0 syntax as a convenient short-hand that means the same as the wordier syntax. The advantages are that it would make the VRML file syntax easier to parse and would eliminate some ambiguities that can arise if fields and nodes are allowed to have the same type names. The disadvantages are that it would make VRML files slightly bigger, is less convenient to type in, and is a change from VRML 1.0 syntax.
In any case, to allow grouping nodes to be used as prototypes and to allow them to be seen in the script API, their children must "really" be an MFNode field. So a Transform might be specified as:
PROTO Transform [ field    SFVec3f translation 0 0 0
                  eventIn  SFVec3f setTranslation
                  eventOut SFVec3f translationChanged
                  ... etc for the other transformation fields...
                  field    MFNode children [ ]
                  eventIn  MFNode setChildren
                  eventOut MFNode childrenChanged ]
...
Specifying events corresponding to the children field implies that the children of a Transform can change-- that the structure of the scene can be changed by behaviors.
Setting all of the children of a Transform at once (using setChildren) is inconvenient; although not strictly necessary, the following might be very useful:
eventIn MFNode addChildren
eventIn MFNode removeChildren
Sending an addChildren event to the Transform would add all of the children in the message to the Transform's children. Sending a removeChildren event would remove all of the children in the message (little tiny issue: maybe SFNode addChild/removeChild events would be better?).
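For example, a Script that builds new objects might be wired up like this (a sketch; the Script, its eventOut, and its behavior file are hypothetical, and it assumes the addChildren eventIn proposed above):

DEF Spawner Script {
  eventOut MFNode newToys
  behavior "spawner.java"   # hypothetical: constructs nodes via the script API
}
DEF Playroom Transform { }
ROUTE Spawner.newToys TO Playroom.addChildren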
The Transform node's semantics were carefully chosen such that the order of its children is irrelevant. That allows a lot of potential for implementations to re-order the children either before or during rendering for optimization purposes (for example, draw all texture-mapped children before all non-texture mapped children, or sort the children by which region of space they're in, etc). The addChildren/removeChildren events maintain this property-- anything using them doesn't need to concern itself with the order of the children.
A previous version of Moving Worlds had a node called "NodeReference" that was necessary to allow nodes to be inserted as children into the scene. Exposing the children of groups as MFNode fields eliminates the need for something like NodeReference.
This section describes the API from the point of view of somebody using VRML to create behaviors. At least the following functionality will be necessary:
Once a Script node has access to an SFNode or an MFNode value (either from one of the Script's fields, or from an eventIn that sends the script a node), we must decide what operations a script can perform on them. A straw-man proposal:
Node search(Node startingNode, ...criteria...)
{
    for all fields of startingNode {
        if the field type is SFNode {
            Node kid = contents of the field
            if kid matches criteria, return kid
            else {
                Node found = search(kid, criteria)
                if (found != NULL) return found
            }
        }
        else if the field type is MFNode, for each value i {
            Node kid = value[i]
            if kid matches criteria, return kid
            else {
                Node found = search(kid, criteria)
                if (found != NULL) return found
            }
        }
    }
    return NULL
}
The VRML 1.0 material specification is more general than what most 3D rendering libraries and hardware currently support. It is also fairly difficult to explain and understand; a simpler material model will make VRML 2.0 both easier to understand and easier to implement.
First, the notion of per-vertex or per-face materials/colors should be moved from the Material node down into the geometric shapes that support such a notion (such as IndexedFaceSet). Doing this will make colors more consistent with the other per-vertex properties (normals and texture coordinates) and will make it easier for browsers to ensure that the correct number of colors has been specified for a given geometry, etc.
The new syntax for a geometry such as IndexedFaceSet will be:
IndexedFaceSet {
  exposedField SFNode coord    NULL
  exposedField SFNode color    NULL
  exposedField SFNode normal   NULL
  exposedField SFNode texCoord NULL
  ...
}
A new node, similar to the Normal/TextureCoordinate2 nodes, is needed for the color field. It is often useful to define a single set of colors to function as a "color map" that is used by several different geometries, so the colors are specified in a separate node that can be shared. That node will be:
Color {
  exposedField MFColor rgb [ ]   # List of RGB colors
}
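For example, a single Color node can act as a shared color map for several geometries (a sketch; the DEF name and values are illustrative):

DEF Palette Color { rgb [ 1 0 0, 0 1 0, 0 0 1 ] }
...
IndexedFaceSet { color USE Palette ... }
IndexedFaceSet { color USE Palette ... }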
The material parameters in the Material node would all be single-valued, and we suggest that the ambientColor term be removed:
Material {
  exposedField SFColor diffuseColor  0.8 0.8 0.8
  exposedField SFColor specularColor 0 0 0
  exposedField SFColor emissiveColor 0 0 0
  exposedField SFFloat shininess     0.2
  exposedField SFFloat transparency  0
}
If multiple colors are given with the geometry, then they either replace the diffuse component of the Material node (if the material field of the Appearance node is not NULL) or act as an "emissive-only" source (if the material field of the Appearance node is NULL).
Issue: The colors in a VRML SFImage field are RGBA-- RGB plus transparency. Perhaps we should allow SFColor/MFColor fields to be specified with 1, 2, 3 or 4 components to be consistent with SFImage. That would get rid of the transparency field of Material, allow transparency per-face or per-vertex, and would allow compact specification of greyscale, greyscale-alpha, RGB, and RGBA colors. However, that might cause problems for the behavior API and would make parsing more complicated.
Another complicated area of VRML 1.0 is the full set of possible bindings for normals and materials-- DEFAULT, OVERALL, PER_PART, PER_PART_INDEXED, PER_FACE, PER_FACE_INDEXED, PER_VERTEX, and PER_VERTEX_INDEXED. Not all bindings apply to all geometries, and some combinations of bindings and indices do not make sense.
A much simpler specification is possible that gives equivalent functionality:
IndexedFaceSet {
  ...
  field MFInt32 coordIndex      [ ]
  field MFInt32 colorIndex      [ ]
  field SFBool  colorPerVertex  TRUE
  field MFInt32 normalIndex     [ ]
  field SFBool  normalPerVertex TRUE
  field MFInt32 texCoordIndex   [ ]
  ...
}
The existing materialBinding/normalBinding specifications are replaced by simple booleans that specify whether colors or normals should be applied per-vertex or per-face. If indices are specified, then they are used. If they are not specified, then either the vertex indices are used (if per-vertex normals/colors), OR the normals/colors are used in order (if per-face).
In more detail: if colorPerVertex or normalPerVertex is TRUE and the corresponding index field is empty, the vertex indices in coordIndex are used; if the flag is FALSE and the index field is empty, one color or normal is taken, in order, for each face.
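For example, per-face coloring of two triangles might look like this (a sketch; coordinates elided):

IndexedFaceSet {
  ...
  coordIndex [ 0, 1, 2, -1,  2, 1, 3, -1 ]   # two triangles
  colorPerVertex FALSE
  color Color { rgb [ 1 0 0,  0 0 1 ] }   # colorIndex is empty: one color per face, in order
}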
Texture coordinates do not have a PerVertex flag, because texture coordinates are always specified per vertex. The rules for texture coordinates are the same as for per-vertex colors/normals: if texCoordIndex is empty, the vertex indices in coordIndex are used.
IndexedLineSet would add color and colorPerVertex fields, with similar rules to IndexedFaceSet. PointSet would need only a color field (OVERALL color if empty, otherwise color-per-point). The shapes that allow PER_PART colors in VRML 1.0 (Cylinder, Cone) would also only need a color field (PER_PART colors if specified, OVERALL otherwise).
Comparison with VRML 1.0: if all of the possibilities are written out, the only binding missing is the VRML 1.0 PER_VERTEX binding, which ignores the Index fields and just takes colors/normals in order for each vertex of each face. For example, in VRML 1.0 if the coordIndex array contained [ 10, 12, 14, -1, 11, 13, 10, -1 ] (two triangles with one shared vertex), then the PER_VERTEX binding is equivalent to a PER_VERTEX_INDEXED binding with indices [ 0, 1, 2, -1, 3, 4, 5, -1 ] -- that is, each positive entry in the coordIndex array causes another color/normal to be taken from their respective arrays. VRML 1.0 files with PER_VERTEX bindings that are converted to VRML 2.0 will be somewhat larger, since explicit indices will have to be generated.
January 30, 1996
There are many possible architectures for building multi-user worlds. Some systems will be based on a client-server model, others will be based on a peer-to-peer model (possibly using multicast communication), and still others will find some middle ground.
One of the goals of the Moving Worlds proposal is to provide the functionality needed to build these different models without dictating which approach should be used. All of the likely models for multi-user worlds would use Moving Worlds features in the same way.
This document highlights the common requirements for all multi-user worlds and points out the parts of the Moving Worlds proposal that address those requirements. An example demonstrates one possible implementation of a multi-user world.
There are two aspects to a shared multi-user world: sharing the state of the objects in the world among all browsers viewing it, and sharing the users themselves-- each user's location and representation.
The second aspect is a special case of the first in that a special object -- the user representation, or avatar -- is shared among browsers viewing the world. This document discusses the specific case of shared avatars and then generalizes the functionality to support any shared item.
The approach is the same for both the specific and general cases. One or more Script nodes are put into the world, each specifying either a general-purpose or an application-specific applet which carries out the network functionality. For each node containing shared information, either Routes are added between the appropriate events and the Script node, or the shared node itself is passed to the Script (which can then get and set the node's fields). The former approach should be used when information is needed every time the relevant node changes; the latter should be used when shared information is only needed occasionally.
To support shared multi-user worlds, the following basics are needed:
The browser must make available the spatial location of the viewer relative to a known point, so that the scripts controlling the multi-user functionality can send this position to other browsers.
A BoxProximitySensor can indicate where a user is relative to a specific frame of reference. This is useful when the location and orientation are only needed for parts of the world, or if the world is segmented into Zones (see below) with one sensor for each. (A BoxProximitySensor can be attached to the world's root Transform if it's necessary to determine the viewer's location anywhere in the world.) The user's location can be accessed by routing events from the Sensor's position and orientation fields, or by routing an SFNode for the sensor into the script and reading the fields when needed.
It must be possible to uniquely identify the users in the shared space, in order to inform the server or the other browsers who is sending the location information.
This information is application-specific. The application might require friendly nicknames, email addresses, URLs of avatars, or something else entirely. Typically, user identification will be handled by a script reading information from a configuration file.
Once the spatial location is available, the script needs a means to send the information to other browsers or to the server. The protocol used to do this depends on the application; it could be VRML+, DIS, or VSCP, for example. Moving Worlds is designed to work with any of these approaches.
Moving Worlds supports such communication by allowing scripts to run asynchronously; all of the languages likely to be used for scripting have network functionality. A Java applet, for example, can use the standard Java threading mechanisms to spawn a thread that then uses Java's network classes to send information to the network. Moving Worlds does not need any added API calls to provide this functionality.
Browsers must be able to receive information over the network, either from a server or a peer, and use that information to update the shared scene.
In Moving Worlds, scripts use normal language constructs to handle application-dependent protocols. When a user first connects to a shared world, information about that user's avatar is sent to other browsers sharing that world. This information typically includes a VRML description (or a URL to a VRML description) of the avatar.
The avatar can be put into the world in several ways. For example, one or more nodes could be placed in the world, in the same Transform as the BoxProximitySensor nodes.
Once the avatar is created, the receive-and-display script could provide an SFNode pointer to the avatar's node(s) to allow other scripts to move the avatar around in the scene.
In the case of trying to share state information across the network -- whether that information consists of a boolean indicating whether a light is lit, or the time an animation should start, or the position of a movable object -- a script must be able to read and write this state.
In Moving Worlds this state-sharing is accomplished by routing events between the nodes to be shared and the script that is responsible for propagating the changes.
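For instance, sharing a lamp's on/off state might be wired up like this (a sketch; the Script and its behavior file are hypothetical, and the setOn/onChanged event names follow the naming convention shown earlier for Transform):

DEF Lamp PointLight { on FALSE }
DEF Net Script {
  eventIn SFBool localChange    # local flips of the lamp get sent to the network
  eventOut SFBool remoteChange  # updates from the network get applied locally
  behavior "sharedlamp.java"    # hypothetical network script
}
ROUTE Lamp.onChanged TO Net.localChange
ROUTE Net.remoteChange TO Lamp.setOn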
Issue: naming the shared state is the difficult part of this design. A separate eventIn and eventOut (or a separate script) must be provided for every piece of state that can change, so that each piece can be associated with a unique name before being sent over the network. Where does that unique name come from? It could be a DEF name, a name associated with a route, or a name associated with the eventIn on the Script node. The first two work with fan-in but are not passed in the event; the last does not work with fan-in. One possible fix would be to support a "userData" field on routes, allowing fan-in to be used without losing the information about which piece of state was being sent.
As information is received off the network, the script can either send an event via a route to change the state, or can use the getValue() method to obtain the value of a field (if the script has an SFNode pointer to the portion of the scene graph being changed).
The goal of this document is to give potential users and implementors of Moving Worlds a feel for how multi-user systems could be built. However, one strength of the Moving Worlds proposal is that it balances simplicity against flexibility. There are thus many interesting and important aspects of distributed systems that this proposal makes no attempt to address-- those areas are open, and the choice of how to solve them is left to the system builder. As a concrete starting point, here is one possible scene setup for shared avatars:
Separator {
  DEF bar BoxProximitySensor { }   # To detect our own position
  DEF foo Separator { }            # Avatars are added here
}
DEF baz Script {
  eventIn SFVec3f position
  eventIn SFRotation orientation
  field SFNode avatarRoot USE foo
  behavior "http://xyz.com/mynetworkprotocol.java"
}
ROUTE bar.position -> baz.position
ROUTE bar.orientation -> baz.orientation
This example needs rewriting to match the final version of the API, but it should be close enough to give the general idea.

class MyNetworkProtocol extends VRMLApplet implements Runnable {
    void start() {
        // spawn a thread (run, below) to monitor the network
    }
    void eventIn(Event e) {
        // send position and orientation events to the server
    }
    public void run() {
        // monitors the network; on receipt of appropriate events
        // from the network, sends add/delete/change events to avatarRoot
    }
}
This document contains examples to clarify various aspects of the Moving Worlds proposal.
This example has two parts. First is a simple VRML 1.0 scene containing a red cone, a blue sphere, and a green cylinder with a hierarchical transformation structure. Next is the same scene using the Moving Worlds Transforms-and-leaves syntax.
#VRML V1.0 ascii
Separator {
  Transform { translation 0 2 0 }
  Material { diffuseColor 1 0 0 }
  Cone { }
  Separator {
    Transform { scaleFactor 2 2 2 }
    Material { diffuseColor 0 0 1 }
    Sphere { }
    Transform { translation 2 0 0 }
    Material { diffuseColor 0 1 0 }
    Cylinder { }
  }
}
#VRML V2.0 ascii
Transform {
  translation 0 2 0
  children [
    Shape {
      appearance Appearance {
        material Material { diffuseColor 1 0 0 }
      }
      geometry Cone { }
    },
    Transform {
      scaleFactor 2 2 2
      children [
        Shape {
          appearance Appearance {
            material Material { diffuseColor 0 0 1 }
          }
          geometry Sphere { }
        },
        Transform {
          translation 2 0 0
          children [
            Shape {
              appearance Appearance {
                material Material { diffuseColor 0 1 0 }
              }
              geometry Cylinder { }
            }
          ]
        }
      ]
    }
  ]
}
Moving Worlds has the capability to define new nodes. VRML 1.0 had the ability to add nodes using the fields field and isA keyword. The prototype feature can duplicate all the features of the 1.0 node definition capabilities, as well as the alternate representation feature proposed in the VRML 1.1 draft spec. Take the example of a RefractiveMaterial. This is just like a Material node but adds an indexOfRefraction field. This field can be ignored if the browser cannot render refraction. In VRML 1.0 this would be written like this:
...
RefractiveMaterial {
  fields [ SFColor ambientColor, MFColor diffuseColor,
           SFColor specularColor, MFColor emissiveColor,
           SFFloat shininess, MFFloat transparency,
           SFFloat indexOfRefraction, MFString isA ]
  isA "Material"
}
If the browser has been hardcoded to understand a RefractiveMaterial, the indexOfRefraction field would be used; otherwise it would be ignored, and RefractiveMaterial would behave just like a Material node.
In VRML 2.0 this is written like this:
...
PROTO RefractiveMaterial [ field SFColor ambientColor 0 0 0
                           field MFColor diffuseColor 0.5 0.5 0.5
                           field SFColor specularColor 0 0 0
                           field MFColor emissiveColor 0 0 0
                           field SFFloat shininess 0
                           field MFFloat transparency 0 0 0
                           field SFFloat indexOfRefraction 0.1 ]
{
  Material {
    ambientColor IS ambientColor
    diffuseColor IS diffuseColor
    specularColor IS specularColor
    emissiveColor IS emissiveColor
    shininess IS shininess
    transparency IS transparency
  }
}
While this is wordier, notice that the default values were given in the prototype, and that they are different from the defaults for the standard Material. This means prototypes can also be used to change the defaults on a standard node. The EXTERNPROTO capability allows the use of alternative implementations of a node:
...
EXTERNPROTO RefractiveMaterial [ field SFColor ambientColor
                                 field MFColor diffuseColor
                                 field SFColor specularColor
                                 field MFColor emissiveColor
                                 field SFFloat shininess
                                 field MFFloat transparency
                                 field SFFloat indexOfRefraction ]
[ "http://www.myCompany.com/vrmlNodes/RefractiveMaterial.wrl",
  "http://somewhere.else/MyRefractiveMaterial.wrl" ]
This will choose from one of three possible sources of RefractiveMaterial. If the browser has this node hardcoded, it will be used. Otherwise the first URL will be requested and a prototype of the node will be used from there. If that fails, the second will be tried.
Moving Worlds has a new Text node which allows the use of UTF8 characters to display text in any language. For a few languages (like Chinese) a language field is required to give a full specification of the character set to use. Because this field is part of the Text node, the Chinese language would have to be set in every Text block in order for Chinese to be used throughout the file. The prototype feature solves this problem by allowing a custom ChineseText node to be defined.
PROTO ChineseText [ field MFString string "" ]
{
  Text {
    language "ch"
    direction TBRL
    string IS string
  }
}
Note also that the default direction is set to be top-to-bottom for each string and right-to-left for consecutive strings, a common format for Chinese text.
Shuttles and pendulums are great building blocks for composing interesting animations. This shuttle translates its children back and forth along the X axis, from -1 to 1. The pendulum rotates its children about the Y axis, from 0 to 3.14159 radians and back again.
PROTO Shuttle [ field SFFloat rate 1
                eventIn SFBool moveRight
                eventOut SFBool isAtLeft
                field MFNode children [ ] ]
{
  DEF F Transform { children IS children }
  DEF T TimeSensor {
    cycleCount -1
    cycleInterval IS rate
  }
  DEF S Script {
    field SFBool right TRUE
    eventIn SFBool moveRight IS moveRight
    eventIn SFBool isActive
    eventOut SFBool isAtLeft IS isAtLeft
    eventOut SFBool up
    eventOut SFBool down
    eventOut SFTime start
    eventOut SFInt32 resetCount
    behavior "shuttle.java"
  }
  DEF I PositionInterpolator {
    keys [ 0, 1 ]
    values [ -1 0 0, 1 0 0 ]
  }
  ROUTE T.fraction TO I.set_fraction
  ROUTE I.outValue TO F.set_translation
  ROUTE T.isActive TO S.isActive
  ROUTE S.resetCount TO T.cycleCount
}

shuttle.java
------------
import vrml.*;

class Shuttle extends Script {
    SFBool right = (SFBool) getField("right");
    SFBool isAtLeft = (SFBool) getEventOut("isAtLeft");
    SFBool up = (SFBool) getEventOut("up");
    SFBool down = (SFBool) getEventOut("down");
    SFTime start = (SFTime) getEventOut("start");
    SFInt32 resetCount = (SFInt32) getEventOut("resetCount");

    public void moveRight(ConstSFBool value, SFTime ts) {
        if (value.getValue()) {
            // want to move right
            up.setValue(true);
            down.setValue(false);
            start.setValue(ts.getValue());
        } else {
            // want to move left
            up.setValue(false);
            down.setValue(true);
            start.setValue(ts.getValue());
        }
    }

    public void isActive(ConstSFBool value, SFTime ts) {
        // if this is false (we transitioned from active to inactive)
        // we can send our isAtLeft event
        if (!value.getValue()) {
            right.setValue(!right.getValue());
            isAtLeft.setValue(!right.getValue());
            resetCount.setValue(1);   // stop the TimeSensor
        }
    }
}

PROTO Pendulum [ field SFFloat rate 1
                 eventIn SFBool moveCW
                 eventOut SFBool isAtCCW
                 field MFNode children [ ] ]
{
  DEF F Transform { children IS children }
  DEF T TimeSensor {
    cycleCount -1
    cycleInterval IS rate
  }
  DEF S Script {
    field SFBool CW TRUE
    eventIn SFBool moveCW IS moveCW
    eventIn SFBool isActive
    eventOut SFBool isAtCCW IS isAtCCW
    eventOut SFBool up
    eventOut SFBool down
    eventOut SFTime start
    eventOut SFInt32 resetCount
    behavior "pendulum.java"
  }
  DEF I RotationInterpolator {
    keys [ 0, 1 ]
    values [ 0 1 0 0, 0 1 0 3.14159 ]
  }
  ROUTE T.fraction TO I.set_fraction
  ROUTE I.outValue TO F.set_rotation
  ROUTE T.isActive TO S.isActive
  ROUTE S.resetCount TO T.cycleCount
}

pendulum.java
-------------
import vrml.*;

class Pendulum extends Script {
    SFBool CW = (SFBool) getField("CW");
    SFBool isAtCCW = (SFBool) getEventOut("isAtCCW");
    SFBool up = (SFBool) getEventOut("up");
    SFBool down = (SFBool) getEventOut("down");
    SFTime start = (SFTime) getEventOut("start");
    SFInt32 resetCount = (SFInt32) getEventOut("resetCount");

    public void moveCW(ConstSFBool value, SFTime ts) {
        if (value.getValue()) {
            // want to move clockwise
            up.setValue(true);
            down.setValue(false);
            start.setValue(ts.getValue());
        } else {
            // want to move counter-clockwise
            up.setValue(false);
            down.setValue(true);
            start.setValue(ts.getValue());
        }
    }

    public void isActive(ConstSFBool value, SFTime ts) {
        // if this is false (we transitioned from active to inactive)
        // we can send our isAtCCW event
        if (!value.getValue()) {
            CW.setValue(!CW.getValue());
            isAtCCW.setValue(!CW.getValue());
            resetCount.setValue(1);   // stop the TimeSensor
        }
    }
}
In use, the Shuttle can have its isAtLeft output wired to its moveRight input to give a continuous shuttle. The Pendulum can have its isAtCCW output wired to its moveCW input to give a continuous pendulum effect. Note that the initial value of TimeSensor's cycleCount is -1, which causes the TimeSensor to start immediately; cycleCount is then set to 1 after the first cycle to take control of the TimeSensor.
Robots are very popular in VRML discussion groups. Here's a simple implementation of one. This robot has very simple body parts: a cube for his head, a sphere for his body, and cylinders for arms (he hovers, so he has no feet!). He is something of a sentry-- he walks forward, turns around, and walks back, forever. This makes liberal use of the Shuttle and Pendulum above.
DEF Walk Shuttle {
  rate 10
  children [
    DEF Turn Pendulum {
      children [
        # The Robot
        Shape { geometry Cube { } },   # head
        Transform {
          scaleFactor 1 5 1
          translation 0 -5 0
          children [ Shape { geometry Sphere { } } ]   # body
        },
        DEF Arm Pendulum {
          children [
            Transform {
              scaleFactor 1 7 1
              translation 1 -5 0
              children [ Shape { geometry Cylinder { } } ]
            }
          ]
        },
        # duplicate the arm on the other side and flip it so it
        # swings in opposition
        Transform {
          rotation 0 1 0 3.14159
          translation 10 0 0
          children [ USE Arm ]
        }
      ]
    }
  ]
}
# Hook up the sentry. The arms will swing infinitely. He walks
# along the shuttle path, then turns, then walks back, etc.
ROUTE Arm.isAtCCW TO Arm.moveCW
ROUTE Walk.isAtLeft TO Turn.moveCW
ROUTE Turn.isAtCCW TO Walk.moveRight
The Moving Worlds definition of WWWAnchor does not have the map field from VRML 1.0, because that field was of limited value. The 1.0 map field tried to duplicate the imagemap facility of HTML, but what was really needed was the texture coordinate of the point the user picked. Moving Worlds can fix this with a PROTO for a better WWWAnchor. This version also adds the target field, which has been so popular lately.
PROTO TextureAnchor [ field SFString name ""
                      field SFString target ""
                      field MFNode children [ ] ]
{
  Group {
    children [
      DEF CS ClickSensor { },
      Group { children IS children }
    ]
  }
  DEF S Script {
    field SFString name IS name
    field SFString target IS target
    eventIn SFVec2f hitTexCoord
    behavior "TextureAnchor.java"
  }
  ROUTE CS.hitTexCoord TO S.hitTexCoord
}

TextureAnchor.java
------------------
import vrml.*;

class TextureAnchor extends Script {
    SFString name = (SFString) getField("name");
    SFString target = (SFString) getField("target");

    public void hitTexCoord(ConstSFVec2f value, SFTime ts) {
        // construct the URL string and ask the browser to load it
        String str = name.getValue() + "?" +
                     value.getValue()[0] + "," +
                     value.getValue()[1] +
                     " target=" + target.getValue();
        Browser.loadURL(str);
    }
}
Here is a simple example of animation triggered by a ClickSensor. It uses an EXTERNPROTO to include a Rotor node from the net, which does the actual animation.
EXTERNPROTO Rotor [ eventIn MFFloat Spin
                    field MFNode children ]
"http://somewhere/Rotor.wrl"   # Where to look for the implementation

PROTO Chopper [ field SFFloat maxAltitude 30
                field SFFloat rotorSpeed 1 ]
{
  Group {
    children [
      DEF CLICK ClickSensor { },   # Gotta get click events
      Shape { ... body... },
      DEF Top Rotor { ... geometry ... },
      DEF Back Rotor { ... geometry ... }
    ]
  }
  DEF SCRIPT Script {
    eventIn SFBool startOrStopEngines
    field SFFloat maxAltitude IS maxAltitude
    field SFFloat rotorSpeed IS rotorSpeed
    field SFNode topRotor USE Top
    field SFNode backRotor USE Back
    scriptType "java"
    behavior "chopper.java"
  }
  ROUTE CLICK.isActive -> SCRIPT.startOrStopEngines
}

DEF MyScene Group {
  DEF MikesChopper Chopper { maxAltitude 40 }
}

chopper.java:
-------------
import vrml.*;

public class Chopper extends Script {
    SFNode TopRotor = (SFNode) getField("topRotor");
    SFNode BackRotor = (SFNode) getField("backRotor");
    float fRotorSpeed = ((SFFloat) getField("rotorSpeed")).getValue();
    boolean bEngineStarted = false;

    public void startOrStopEngines(ConstSFBool value, SFTime ts) {
        boolean val = value.getValue();
        // Don't do anything on mouse-down:
        if (!val) return;
        // Otherwise, start or stop the engines:
        if (!bEngineStarted) {
            StartEngine();
        } else {
            StopEngine();
        }
    }

    public void SpinRotors(float fInRotorSpeed, float fSeconds) {
        // pack axis/speed/duration parameters for the Rotor's Spin eventIn
        MFFloat rotorParams = new MFFloat();
        float[] rp = new float[4];
        rp[0] = 0; rp[1] = fInRotorSpeed; rp[2] = 0; rp[3] = fSeconds;
        rotorParams.setValue(rp);
        TopRotor.postEventIn("Spin", rotorParams);
        rp[0] = fInRotorSpeed; rp[1] = 0; rp[2] = 0; rp[3] = fSeconds;
        rotorParams.setValue(rp);
        BackRotor.postEventIn("Spin", rotorParams);
    }

    public void StartEngine() {
        // Sound could be done either by controlling a PointSound node
        // (put into another SFNode field) OR by adding/removing a
        // PointSound from the group (in which case the group would
        // need to be passed in an SFNode field).
        SpinRotors(fRotorSpeed, 3);
        bEngineStarted = true;
    }

    public void StopEngine() {
        SpinRotors(0, 6);
        bEngineStarted = false;
    }
}
Moving Worlds has great facilities to put the viewer's camera under the control of a script. This is useful for things such as guided tours, merry-go-round rides, and transportation devices such as buses and elevators. The next two examples show a couple of ways to use this feature.
The first example is a simple guided tour through the world. Upon entry, a guide orb hovers in front of you. Click on this and your tour through the world begins. The orb follows you around on your tour. Perhaps a PointSound node can be embedded inside to point out the sights.
Group {
  children [
    <geometry for the world>,
    DEF GuideTransform Transform {
      children [
        DEF TourGuide Viewpoint { },
        DEF StartTour ClickSensor { },
        Shape { geometry Sphere { } }   # the guide orb
      ]
    }
  ]
}
DEF GuidePI PositionInterpolator { keys [ ... ] values [ ... ] }
DEF GuideRI RotationInterpolator { keys [ ... ] values [ ... ] }
DEF TS TimeSensor { cycleInterval 60 }   # 60 second tour
DEF S Script {
  field SFNode viewpoint USE TourGuide
  eventIn SFBool active
  eventIn SFBool done
  eventOut SFTime start
  behavior "GuidedTour.java"
}
ROUTE StartTour.isActive TO S.active
ROUTE S.start TO TS.startTime
ROUTE TS.isActive TO S.done
ROUTE TS.fraction TO GuidePI.set_fraction
ROUTE TS.fraction TO GuideRI.set_fraction
ROUTE GuidePI.outValue TO GuideTransform.set_translation
ROUTE GuideRI.outValue TO GuideTransform.set_rotation

GuidedTour.java:
----------------
import vrml.*;

public class GuidedTour extends Script {
    SFTime start = (SFTime) getEventOut("start");
    SFNode viewpoint = (SFNode) getField("viewpoint");

    public void active(ConstSFBool value, SFTime ts) {
        if (value.getValue()) {
            // start the tour
            Browser.bindViewpoint(viewpoint.getValue());
            start.setValue(ts.getValue());
        }
    }

    public void done(ConstSFBool value, SFTime ts) {
        if (!value.getValue()) {
            // end of the tour
            Browser.unbindViewpoint();
        }
    }
}
Here's another example of animating the camera. This time it's an elevator to ease access to a multistory building. For this example I'll just show a two-story building, and I'll assume that the elevator is already at the ground floor. To go up, you just step inside; a BoxProximitySensor fires and starts the elevator up automatically. I'll leave call buttons outside the elevator, elevator doors, and floor selector buttons as an exercise for the reader!
Group {
  children [
    DEF ETransform Transform {
      children [
        DEF EViewpoint Viewpoint { },
        DEF EProximity BoxProximitySensor { size 2 2 2 },
        <geometry for the elevator, a unit cube about the origin with a doorway>
      ]
    }
  ]
}
DEF ElevatorPI PositionInterpolator {
  keys [ 0, 1 ]
  values [ 0 0 0, 0 4 0 ]   # a floor is 4 meters high
}
DEF TS TimeSensor { cycleInterval 10 }   # 10 second travel time
DEF S Script {
  field SFNode viewpoint USE EViewpoint
  eventIn SFBool active
  eventIn SFBool done
  eventOut SFTime start
  behavior "Elevator.java"
}
ROUTE EProximity.isActive TO S.active
ROUTE S.start TO TS.startTime
ROUTE TS.isActive TO S.done
ROUTE TS.fraction TO ElevatorPI.set_fraction
ROUTE ElevatorPI.outValue TO ETransform.set_translation

Elevator.java:
--------------
import vrml.*;

public class Elevator extends Script {
    SFTime start = (SFTime) getEventOut("start");
    SFNode viewpoint = (SFNode) getField("viewpoint");

    public void active(ConstSFBool value, SFTime ts) {
        if (value.getValue()) {
            // start the elevator
            Browser.bindViewpoint(viewpoint.getValue());
            start.setValue(ts.getValue());
        }
    }

    public void done(ConstSFBool value, SFTime ts) {
        if (!value.getValue()) {
            // end of the ride
            Browser.unbindViewpoint();
        }
    }
}